Sample records for multiple randomly distributed

  1. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
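
The claim above is easy to probe numerically. A minimal sketch (illustrative only; i.i.d. log-normal summands are assumed here, while the paper's class of positive summands is more general):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum N positive summands (log-normal here, as a stand-in for the paper's
# more general class) over many independent realizations.
N, trials = 100, 100_000
sums = rng.lognormal(mean=0.0, sigma=1.0, size=(trials, N)).sum(axis=1)

# A Gaussian fit would predict zero skewness, but the standardized sum
# remains clearly right-skewed at this N, i.e. closer to log-normal.
z = (sums - sums.mean()) / sums.std()
print(f"skewness of the sum of {N} summands: {(z**3).mean():.2f}")
```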

  2. Critical side channel effects in random bit generation with multiple semiconductor lasers in a polarization-based quantum key distribution system.

    PubMed

    Ko, Heasin; Choi, Byung-Seok; Choe, Joong-Seon; Kim, Kap-Joong; Kim, Jong-Hoi; Youn, Chun Ju

    2017-08-21

    Most polarization-based BB84 quantum key distribution (QKD) systems utilize multiple lasers to generate one of four polarization quantum states randomly. However, random bit generation with multiple lasers can potentially open critical side channels that significantly endanger the security of QKD systems. In this paper, we show unnoticed side channels of temporal disparity and intensity fluctuation, which can exist in the operation of multiple semiconductor laser diodes. Experimental results show that these side channels can enormously degrade the security performance of QKD systems. An important system issue related to the laser driving condition, concerning the improvement of the quantum bit error rate (QBER), is further addressed with experimental results.

  3. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.

    PubMed

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2013-02-01

    In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of the networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov function method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have a great impact on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.
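
As a rough numerical illustration of mean-square synchronization under randomly occurring couplings (a linear toy model only; the paper's nonlinearities, adaptive updating laws, and three separate Bernoulli sequences are omitted, and the graph, gain, and probability below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)

# Ring plus random extra edges, so the graph is surely connected.
n, steps, p, gain = 10, 400, 0.7, 0.1
idx = np.arange(n)
ring = np.zeros((n, n))
ring[idx, (idx + 1) % n] = 1
extra = np.triu(rng.random((n, n)) < 0.3, 1)
A = np.clip(ring + ring.T + extra + extra.T, 0, 1)

# Each step, every coupling fires independently with probability p
# (Bernoulli-occurring controllers); states move toward active neighbors.
x = rng.normal(size=n)
spread0 = x.std()
for _ in range(steps):
    bern = (rng.random((n, n)) < p) * A
    x = x + gain * (bern * (x[None, :] - x[:, None])).sum(axis=1)

print(f"state spread: {spread0:.2f} -> {x.std():.2e}")
```

Each realized update is row-stochastic with positive diagonal, so with a connected union graph the states contract to consensus in mean square.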

  4. Weighted Scaling in Non-growth Random Networks

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li

    2012-09-01

    We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it and the strength of a vertex as the sum of weights of the multiple-edges attached to it. The network evolves according to a vertex strength preferential selection mechanism. During the evolution process, the network always holds its total numbers of vertices and single-edges constant. We show analytically and numerically that a network forms steady scale-free distributions under our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also develops an exponential edge-weight distribution; that is, a scale-free distribution and an exponential distribution coexist.

  5. A multiple scattering theory for EM wave propagation in a dense random medium

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.; Wong, K. W.

    1985-01-01

    For a dense medium of randomly distributed scatterers an integral formulation for the total coherent field has been developed. This formulation accounts for the multiple scattering of electromagnetic waves including both the two- and three-particle terms. It is shown that under the Markovian assumption the total coherent field and the effective field have the same effective wave number. As an illustration of this theory, the effective wave number and the extinction coefficient are derived in terms of the polarizability tensor and the pair distribution function for randomly distributed small spherical scatterers. It is found that the contribution of the three-particle term increases with the particle size, the volume fraction, the frequency and the permittivity of the particle. This increase is more significant with frequency and particle size than with other parameters.

  6. Random matrices with external source and the asymptotic behaviour of multiple orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aptekarev, Alexander I; Lysov, Vladimir G; Tulyakov, Dmitrii N

    2011-02-28

    Ensembles of random Hermitian matrices with a distribution measure defined by an anharmonic potential perturbed by an external source are considered. The limiting characteristics of the eigenvalue distribution of the matrices in these ensembles are related to the asymptotic behaviour of a certain system of multiple orthogonal polynomials. Strong asymptotic formulae are derived for this system. As a consequence, for matrices in this ensemble the limit mean eigenvalue density is found, and a variational principle is proposed to characterize this density. Bibliography: 35 titles.

  7. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    PubMed

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  8. Cooperation evolution in random multiplicative environments

    NASA Astrophysics Data System (ADS)

    Yaari, G.; Solomon, S.

    2010-02-01

    Most real life systems have a random component: the multitude of endogenous and exogenous factors influencing them result in stochastic fluctuations of the parameters determining their dynamics. These empirical systems are in many cases subject to noise of multiplicative nature. The special properties of multiplicative noise as opposed to additive noise have been noticed for a long while. Even though apparently and formally the difference between free additive vs. multiplicative random walks consists in just a move from normal to log-normal distributions, in practice the implications are much more far reaching. While in an additive context the emergence and survival of cooperation requires special conditions (especially some level of reward, punishment, reciprocity), we find that in the multiplicative random context the emergence of cooperation is much more natural and effective. We study the various implications of this observation and its applications in various contexts.
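
The normal vs. log-normal contrast mentioned above can be reproduced in a few lines (an illustrative simulation, not the paper's cooperation model):

```python
import numpy as np

rng = np.random.default_rng(1)
steps, walkers = 500, 20_000

# Additive walk: x_T = sum of i.i.d. steps -> Gaussian marginals (CLT).
additive = rng.normal(0.0, 0.1, size=(walkers, steps)).sum(axis=1)

# Free multiplicative walk: x_T = product of i.i.d. positive factors,
# so log x_T is a sum of i.i.d. terms -> log-normal marginals.
multiplicative = rng.lognormal(0.0, 0.1, size=(walkers, steps)).prod(axis=1)

logm = np.log(multiplicative)
print(f"additive:      mean {additive.mean():+.3f}, std {additive.std():.3f}")
print(f"log(multipl.): mean {logm.mean():+.3f}, std {logm.std():.3f}")
```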

  9. Turbulence hierarchy in a random fibre laser

    PubMed Central

    González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman

    2017-01-01

    Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064

  10. Research on Some Bus Transport Networks with Random Overlapping Clique Structure

    NASA Astrophysics Data System (ADS)

    Yang, Xu-Hua; Wang, Bo; Wang, Wan-Liang; Sun, You-Xian

    2008-11-01

    On the basis of investigating the statistical data of bus transport networks of three big cities in China, we propose that each bus route is a clique (maximal complete subgraph) and that a bus transport network (BTN) consists of a large number of cliques, which intensively connect and overlap with each other. We study the network properties, which include the degree distribution, the multiple edges' overlapping time distribution, the distribution of the overlap size between any two overlapping cliques, and the distribution of the number of cliques that a node belongs to. Naturally, the cliques also constitute a network, with the overlapping nodes being their multiple links. We also study its network properties such as degree distribution, clustering, average path length, and so on. We propose that a BTN has the properties of random clique increment and random clique overlap; at the same time, a BTN is a small-world network that is highly clique-clustered and highly clique-overlapped. Finally, we introduce a BTN evolution model whose simulation results agree well with the statistical laws that emerge in real BTNs.

  11. Multiwavelength generation in a random distributed feedback fiber laser using an all fiber Lyot filter.

    PubMed

    Sugavanam, S; Yan, Z; Kamynin, V; Kurkov, A S; Zhang, L; Churkin, D V

    2014-02-10

    Multiwavelength lasing in the random distributed feedback fiber laser is demonstrated by employing an all fiber Lyot filter. Stable multiwavelength generation is obtained, with each line exhibiting sub-nanometer line-widths. A flat power distribution over multiple lines is obtained, which indicates that the power between lines is redistributed in nonlinear mixing processes. The multiwavelength generation is observed both in first and second Stokes waves.

  12. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.

  13. Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick

    2018-05-01

    For an N × N Haar-distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and, by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L¹-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.

  14. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior. Jeffreys’ prior is a non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys’ prior is combined with the sample information to obtain the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys’ prior. The estimates of β and Σ are obtained as expected values of the random variables under their marginal posterior distribution functions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate in closed form. Therefore, an approach is needed: random samples are generated according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
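
A minimal Gibbs-sampler sketch of the procedure described above, under the Jeffreys prior p(B, Σ) ∝ |Σ|^(-(q+1)/2) (the synthetic data, dimensions, and chain lengths are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Synthetic data for the multivariate multiple regression Y = X B + E,
# with rows of E ~ N(0, Sigma).
n, p, q = 200, 3, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
B_true = np.array([[1.0, -1.0], [2.0, 0.5], [0.0, 1.5]])
Y = X @ B_true + rng.normal(scale=0.3, size=(n, q))

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y          # least-squares estimate
A = np.linalg.cholesky(XtX_inv)

# Gibbs sweeps over the two full conditionals under the Jeffreys prior:
#   B | Sigma, Y  ~  matrix normal MN(B_hat, (X'X)^{-1}, Sigma)
#   Sigma | B, Y  ~  inverse Wishart(df = n, scale = (Y - XB)'(Y - XB))
Sigma, draws = np.eye(q), []
for it in range(2000):
    L = np.linalg.cholesky(Sigma)
    B = B_hat + A @ rng.normal(size=(p, q)) @ L.T
    R = Y - X @ B
    Sigma = invwishart(df=n, scale=R.T @ R).rvs(random_state=rng)
    if it >= 500:                  # discard burn-in
        draws.append(B)

B_post = np.mean(draws, axis=0)    # posterior-mean estimate of B
print(B_post.round(2))
```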

  15. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  16. Gravitational lensing by an ensemble of isothermal galaxies

    NASA Technical Reports Server (NTRS)

    Katz, Neal; Paczynski, Bohdan

    1987-01-01

    Calculation of 28,000 models of gravitational lensing of a distant quasar by an ensemble of randomly placed galaxies, each having a singular isothermal mass distribution, is reported. The average surface mass density was 0.2 of the critical value in all models. It is found that the surface mass density averaged over the area of the smallest circle that encompasses the multiple images is 0.82, only slightly smaller than expected from a simple analytical model of Turner et al. (1984). The probability of getting multiple images is also as large as expected analytically. Gravitational lensing is dominated by the matter in the beam; i.e., by the beam convergence. The cases where the multiple imaging is due to asymmetry in mass distribution (i.e., due to shear) are very rare. Therefore, the observed gravitational-lens candidates for which no lensing object has been detected between the images cannot be a result of asymmetric mass distribution outside the images, at least in a model with randomly distributed galaxies. A surprisingly large number of large separations between the multiple images is found: up to 25 percent of multiple images have their angular separation 2 to 4 times larger than expected in a simple analytical model.

  17. Multiplicative processes in visual cognition

    NASA Astrophysics Data System (ADS)

    Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.

    2014-03-01

    The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.

  18. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such limit law is stationary, in the sense that gluing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has a continuous positive density on the positive reals, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.

  19. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either as a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
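
The stratification-and-pairing scheme described above fits in a few lines (a bare-bones sketch on the unit hypercube, with none of the library's correlation control or distribution types):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Basic LHS on the unit hypercube: split [0, 1] into n_samples
    equal-probability bins per variable, draw one point per bin, then
    permute bins independently per variable to pair values at random."""
    u = rng.random((n_samples, n_vars))        # position inside each bin
    out = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        perm = rng.permutation(n_samples)      # random pairing across variables
        out[:, j] = (perm + u[:, j]) / n_samples
    return out

rng = np.random.default_rng(0)
s = latin_hypercube(10, 3, rng)
# Each variable lands exactly once in each of its 10 bins.
print(np.sort((s * 10).astype(int), axis=0).T)
```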

  20. A matrix contraction process

    NASA Astrophysics Data System (ADS)

    Wilkinson, Michael; Grant, John

    2018-03-01

    We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
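
A rough simulation of the process as described (the 2×2 Gaussian matrix ensemble and the reset multiple c = 0.5 are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)

# Multiply i.i.d. random matrices (this ensemble has a positive Lyapunov
# exponent, so the product typically grows); whenever the norm reaches 1,
# reset the product to c times the identity and continue multiplying.
c, steps, resets = 0.5, 100_000, 0
M = c * np.eye(2)
norms = np.empty(steps)
for t in range(steps):
    M = rng.normal(scale=1.2, size=(2, 2)) @ M
    if np.linalg.norm(M, 2) >= 1.0:            # spectral norm of the product
        M, resets = c * np.eye(2), resets + 1
    norms[t] = np.linalg.norm(M, 2)

print(f"{resets} resets in {steps} steps; all recorded norms < 1")
```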

  1. Common Randomness Principles of Secrecy

    ERIC Educational Resources Information Center

    Tyagi, Himanshu

    2013-01-01

    This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…

  2. Power-law Exponent in Multiplicative Langevin Equation with Temporally Correlated Noise

    NASA Astrophysics Data System (ADS)

    Morita, Satoru

    2018-05-01

    Power-law distributions are ubiquitous in nature. Random multiplicative processes are a basic model for the generation of power-law distributions. For discrete-time systems, the power-law exponent is known to decrease as the autocorrelation time of the multiplier increases. However, for continuous-time systems, it is not yet clear how the temporal correlation affects the power-law behavior. Herein, we analytically investigated a multiplicative Langevin equation with colored noise. We show that the power-law exponent depends on the details of the multiplicative noise, in contrast to the case of discrete-time systems.
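
The basic discrete-time mechanism, multiplication plus a lower bound, is easy to simulate (a toy model with white noise, chosen for illustration; it does not implement the paper's colored-noise Langevin equation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random multiplicative process x -> a * x with a reflecting barrier at
# x = 1 and log a ~ N(m, s2), m < 0. The stationary tail is a power law
# P(X > x) ~ x^(-mu) with mu solving E[a^mu] = 1, i.e. mu = 2|m|/s2.
m, s2, chains, steps = -0.1, 0.1, 20_000, 2_000
y = np.zeros(chains)                  # y = log x, barrier at y = 0
for _ in range(steps):
    y = np.maximum(y + rng.normal(m, np.sqrt(s2), chains), 0.0)

# Hill estimator of mu from exceedances above x = e (i.e. y > 1).
excess = y[y > 1.0] - 1.0
mu_hat = 1.0 / excess.mean()
print(f"predicted mu = {2 * abs(m) / s2:.1f}, estimated mu = {mu_hat:.2f}")
```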

  3. Laser absorption of carbon fiber reinforced polymer with randomly distributed carbon fibers

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Xu, Hebing; Li, Chao

    2018-03-01

    Laser processing of carbon fiber reinforced polymer (CFRP) is a non-traditional machining method with many prospective applications. The laser absorption characteristics of CFRP are analyzed in this paper. A ray tracing model describing the interaction of the laser spot with CFRP is established. The material model contains randomly distributed carbon fibers, generated using an improved carbon fiber placement method. It was found that CFRP absorbs laser light well owing to multiple reflections of the light rays within the material’s microstructure. The randomly distributed carbon fibers make the absorptivity of the light rays vary randomly across the laser spot, and the average absorptivity fluctuates noticeably as the laser moves. The experimental measurements agree well with the values predicted by the ray tracing model.

  4. Effects of random tooth profile errors on the dynamic behaviors of planetary gears

    NASA Astrophysics Data System (ADS)

    Xun, Chao; Long, Xinhua; Hua, Hongxing

    2018-02-01

    In this paper, a nonlinear random model is built to describe the dynamics of planetary gear trains (PGTs), in which the time-varying mesh stiffness, tooth profile modification (TPM), tooth contact loss, and random tooth profile errors are considered. A stochastic method based on the method of multiple scales (MMS) is extended to analyze the statistical properties of the dynamic performance of PGTs. By the proposed multiple-scales based stochastic method, the distributions of the dynamic transmission errors (DTEs) are investigated, and the lower and upper bounds are determined based on the 3σ principle. The Monte Carlo method is employed to verify the proposed method. Results indicate that the proposed method determines the distribution of the DTE of PGTs highly efficiently and allows a link between the manufacturing precision and the dynamical response. In addition, the effects of tooth profile modification on the distributions of vibration amplitudes and on the probability of tooth contact loss with different manufacturing tooth profile errors are studied. The results show that the manufacturing precision affects the distribution of dynamic transmission errors dramatically and that appropriate TPMs help to decrease the nominal value and the deviation of the vibration amplitudes.

  5. Convex hulls of random walks in higher dimensions: A large-deviation study

    NASA Astrophysics Data System (ADS)

    Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.

    2017-12-01

    The distributions of the hypervolume V and the surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, resolving probabilities far smaller than P = 10^{-1000} in order to estimate large-deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, as well as a scaling behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
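
The scaling with walk length can be checked directly, e.g. in d = 3 (a small numerical sketch of typical behavior only; the paper's large-deviation sampling, which reaches the far tails, is a much heavier machinery):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(3)

def hull_volume(T, d):
    """Hypervolume of the convex hull of a T-step Gaussian walk in d dims."""
    walk = rng.normal(size=(T, d)).cumsum(axis=0)
    return ConvexHull(walk).volume

# Linear walk size ~ sqrt(T), so volume ~ T^(d/2): doubling T in d = 3
# should multiply the mean hull volume by roughly 2^1.5 ~ 2.83.
v1 = np.mean([hull_volume(200, 3) for _ in range(200)])
v2 = np.mean([hull_volume(400, 3) for _ in range(200)])
print(f"mean-volume ratio for T = 400 vs 200: {v2 / v1:.2f}")
```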

  6. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times, transferring information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: one computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers, submitting serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for the full 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).

  7. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
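
The random-vs-systematic sampling contrast can be illustrated on a toy integral (the golden-ratio lattice below is a generic systematic point set standing in for Conroy's closed symmetric patterns, which this sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(4)

# Integrand on the unit square with a known integral: (2/pi)^2.
f = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)
exact = (2.0 / np.pi) ** 2
N = 10_000

# Monte Carlo: sample points distributed randomly over the region.
mc = f(rng.random(N), rng.random(N)).mean()

# Systematic sampling: a golden-ratio (Fibonacci) lattice whose points
# fill the square in a regular, well-spread pattern.
k = np.arange(N)
lat = f((k + 0.5) / N, (k * 0.6180339887498949) % 1.0).mean()

print(f"MC error {abs(mc - exact):.1e} vs lattice error {abs(lat - exact):.1e}")
```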

  8. Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph

    DTIC Science & Technology

    2014-07-01

    distribution of the random walk. This process can also be applied to other models, incomplete graphs, or to multiple dimensions. An advantage of this... since any multiple of an eigenvector remains an eigenvector. Without any loss, let b_k = 1. Now we can ascertain the explicit solution for b_j when k < j... this bound is valid for all initial probability distributions. However, without detailed information about the eigenvectors, we cannot extract more

  9. Multiple Scattering in Random Mechanical Systems and Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun

    2013-10-01

    This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint, (densely) defined on the space of square-integrable functions over the (lower) half-space, with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.

  10. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface, it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero-mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  11. Relaxation dynamics of maximally clustered networks

    NASA Astrophysics Data System (ADS)

    Klaise, Janis; Johnson, Samuel

    2018-01-01

    We study the relaxation dynamics of fully clustered networks (maximal number of triangles) to an unclustered state under two different edge dynamics—the double-edge swap, corresponding to degree-preserving randomization of the configuration model, and single edge replacement, corresponding to full randomization of the Erdős-Rényi random graph. We derive expressions for the time evolution of the degree distribution, edge multiplicity distribution and clustering coefficient. We show that under both dynamics networks undergo a continuous phase transition in which a giant connected component is formed. We calculate the position of the phase transition analytically using the Erdős-Rényi phenomenology.

  12. Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times

    NASA Astrophysics Data System (ADS)

    Fa, Kwok Sau

    2012-09-01

    In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power-law and stretched-exponential functions for the waiting time probability distribution function; this function closely fits the waiting time distribution of BUND futures traded at LIFFE in 1997.

  13. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    PubMed

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
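
    The recommended logit-scale pooling can be sketched in miniature: logit-transform study-level C-statistics, pool them with a DerSimonian-Laird random-effects model, and back-transform. The five C-statistics and standard errors below are invented for illustration:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def dersimonian_laird(y, v):
    """Pool estimates y with within-study variances v; returns (mean, tau^2)."""
    w = [1.0 / vi for vi in v]                       # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]           # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return mu, tau2

# hypothetical C-statistics from five validation studies, with SEs on the logit scale
c_stats = [0.72, 0.68, 0.75, 0.70, 0.80]
se_logit = [0.10, 0.12, 0.09, 0.11, 0.15]

mu, tau2 = dersimonian_laird([logit(c) for c in c_stats],
                             [s ** 2 for s in se_logit])
pooled_c = inv_logit(mu)  # back-transform the pooled logit C-statistic
```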

  15. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis, and processing of materials.

  16. Scattering in discrete random media with implications to propagation through rain. Ph.D. Thesis George Washington Univ., Washington, D.C.

    NASA Technical Reports Server (NTRS)

    Ippolito, L. J., Jr.

    1977-01-01

    The multiple scattering effects on wave propagation through a volume of discrete scatterers were investigated. The mean field and intensity for a distribution of scatterers was developed using a discrete random media formulation, and second order series expansions for the mean field and total intensity derived for one-dimensional and three-dimensional configurations. The volume distribution results were shown to proceed directly from the one-dimensional results. The multiple scattering intensity expansion was compared to the classical single scattering intensity and the classical result was found to represent only the first three terms in the total intensity expansion. The Foldy approximation to the mean field was applied to develop the coherent intensity, and was found to exactly represent all coherent terms of the total intensity.

  17. Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heasler, Patrick G.

    This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.

  18. Modeling species’ realized climatic niche space and predicting their response to global warming for several western forest species with small geographic distributions

    Treesearch

    Marcus V. Warwell; Gerald E. Rehfeldt; Nicholas L. Crookston

    2010-01-01

    The Random Forests multiple regression tree was used to develop an empirically based bioclimatic model of the presence-absence of species occupying small geographic distributions in western North America. The species assessed were subalpine larch (Larix lyallii), smooth Arizona cypress (Cupressus arizonica ssp. glabra...

  19. Propagation of elastic wave in nanoporous material with distributed cylindrical nanoholes

    NASA Astrophysics Data System (ADS)

    Qiang, FangWei; Wei, PeiJun; Liu, XiQiang

    2013-08-01

    The effective propagation constants of plane longitudinal and shear waves in nanoporous material with randomly distributed parallel cylindrical nanoholes are studied. The surface elastic theory is used to account for surface stress effects and to derive the nontraditional boundary condition on the surface of the nanoholes. The plane wave expansion method is used to obtain the scattered waves from a single nanohole. Multiple scattering effects are taken into consideration by summing the scattered waves from all scatterers and performing the configurational averaging over randomly distributed scatterers. The effective propagation constants of the coherent waves, along with the associated dynamic effective elastic moduli, are numerically evaluated. The influence of surface stress is discussed based on the numerical results.

  20. Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys

    NASA Astrophysics Data System (ADS)

    Han, Chao; Shen, Yuzhen; Ma, Wenlin

    2017-12-01

    An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity and low-noise information transmission. The multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and are then encrypted by different random phases, respectively. The encrypted phase-only images are inverse Fourier transformed, generating new object functions. The new functions are located in different blocks and zero-padded for a sparse distribution; they then propagate to a specific region over different distances by angular spectrum diffraction and are superposed to form a single image. The single image is multiplied with a random phase in the frequency domain, and then the phase part of the frequency spectrum is truncated while the amplitude information is retained. The random phases, propagation distances, and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iterative processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images. The superposition of image sequences greatly improves the capacity of encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance the encrypted information capacity and enable image transmission at a highly desired security level.

  1. A novel strategy for load balancing of distributed medical applications.

    PubMed

    Logeswaran, Rajasvaran; Chen, Li-Choo

    2012-04-01

    Current trends in medicine, specifically in the electronic handling of medical applications, ranging from digital imaging, paperless hospital administration and electronic medical records, to telemedicine and computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for the distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.

  2. The Effects of Cognitive Jamming on Wireless Sensor Networks Used for Geolocation

    DTIC Science & Technology

    2012-03-01

    continuously sends out random bits to the channel without following any MAC-layer etiquette [31]. Normally, the underlying MAC protocol allows... information is packaged and distributed on the network layer, only the physical measurements are considered. This protocol is used to detect faulty nodes

  3. Efficient search of multiple types of targets

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-12-01

    Random searches often take place in fragmented landscapes. Also, in many instances, as in animal foraging, significant benefits to the searcher arise from visits to a large diversity of patches with a well-balanced distribution of targets found. To date, such aspects have been widely ignored in the usual single-objective analysis of search efficiency, in which one seeks to maximize just the number of targets found per distance traversed. Here we address the problem of determining the best strategies for the random search when these multiple-objective factors play a key role in the process. We consider a figure of merit (efficiency function) which properly "scores" the mentioned tasks. By considering random walk searchers with a power-law asymptotic Lévy distribution of step lengths, p(ℓ) ~ ℓ^-μ, with 1 < μ ≤ 3, we show that the standard optimal strategy with μ_opt ≈ 2 no longer holds universally. Instead, optimal searches with enhanced superdiffusivity emerge, including values as low as μ_opt ≈ 1.3 (i.e., tending to the ballistic limit). For the general theory of random search optimization, our findings emphasize the necessity of correctly characterizing the multitude of aims in any concrete metric used to compare candidate strategies. In the context of animal foraging, our results might explain some empirical data pointing to stronger superdiffusion (μ < 2) in the search behavior of different animal species, conceivably associated with multiple goals to be achieved in fragmented landscapes.
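
    The power-law step lengths described above can be sampled by inverse-transform; a minimal sketch, in which the lower cutoff ℓ0 = 1 and the sample sizes are arbitrary choices for illustration:

```python
import random

def levy_steps(mu, n, l0=1.0, seed=42):
    """Sample n step lengths from p(l) ~ l**(-mu) for l >= l0 via the inverse CDF."""
    rng = random.Random(seed)
    # F(l) = 1 - (l0/l)**(mu - 1), so the inverse is l = l0 * u**(-1/(mu - 1))
    return [l0 * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0)) for _ in range(n)]

shallow = levy_steps(2.0, 100_000)   # heavier tail, closer to ballistic motion
steep = levy_steps(3.5, 100_000)     # lighter tail, closer to Brownian motion

# fraction of very long steps illustrates the difference in tail weight
frac_long_shallow = sum(l > 10.0 for l in shallow) / len(shallow)
frac_long_steep = sum(l > 10.0 for l in steep) / len(steep)
```

For μ = 3.5 the mean step length exists and equals ℓ0 (μ-1)/(μ-2) = 5/3, which the sample mean should reproduce.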

  4. Score distributions of gapped multiple sequence alignments down to the low-probability tail

    NASA Astrophysics Data System (ADS)

    Fieth, Pascal; Hartmann, Alexander K.

    2016-08-01

    Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignments differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.

  5. Bayesian bivariate meta-analysis of correlated effects: Impact of the prior distributions on the between-study correlation, borrowing of strength, and joint inferences

    PubMed Central

    Bujkiewicz, Sylwia; Riley, Richard D

    2016-01-01

    Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρB; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example are used to examine the impact of various (realistically) vague prior distributions for ρB and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρB, which may then adversely affect inferences such as joint predictive probabilities.
These recommendations are especially important with a small number of studies and missing data. PMID:26988929

  6. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    PubMed

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. 
Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
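
    The model-selection step described above can be illustrated in miniature. The sketch below generates lognormal-like data and compares a two-parameter lognormal fit against a one-parameter exponential fit by AIC, using closed-form maximum-likelihood estimates; fitting the full DPLN family requires numerical optimization and is omitted here:

```python
import math
import random

def aic_lognormal(data):
    """AIC for a lognormal fit (closed-form MLE, k = 2 parameters)."""
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((l - mu) ** 2 for l in logs) / n
    ll = sum(-math.log(x * math.sqrt(2.0 * math.pi * var))
             - (math.log(x) - mu) ** 2 / (2.0 * var) for x in data)
    return 2 * 2 - 2 * ll

def aic_exponential(data):
    """AIC for an exponential fit (MLE rate = n / sum(data), k = 1 parameter)."""
    n = len(data)
    lam = n / sum(data)
    ll = n * math.log(lam) - n  # log-likelihood at the MLE
    return 2 * 1 - 2 * ll

rng = random.Random(1)
sample = [rng.lognormvariate(0.0, 0.5) for _ in range(2000)]
better_fit = "lognormal" if aic_lognormal(sample) < aic_exponential(sample) else "exponential"
```

As expected, the lognormal fit wins on lognormal data; the paper's analysis applies the same AIC criterion across seven candidate distributions.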

  7. On Edge Exchangeable Random Graphs

    NASA Astrophysics Data System (ADS)

    Janson, Svante

    2017-06-01

    We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).

  8. Multiwavelength ytterbium-Brillouin random Rayleigh feedback fiber laser

    NASA Astrophysics Data System (ADS)

    Wu, Han; Wang, Zinan; Fan, Mengqiu; Li, Jiaqi; Meng, Qingyang; Xu, Dangpeng; Rao, Yunjiang

    2018-03-01

    In this letter, we experimentally demonstrate the multiwavelength ytterbium-Brillouin random fiber laser for the first time, in the half-open cavity formed by a fiber loop mirror and randomly distributed Rayleigh mirrors. With a cladding-pumped ytterbium-doped fiber and a long TrueWave fiber, the narrow linewidth Brillouin pump can generate multiple Brillouin Stokes lines with hybrid ytterbium-Brillouin gain. Up to six stable channels with a spacing of about 0.06 nm are obtained. This work extends the operation wavelength of the multiwavelength Brillouin random fiber laser to the 1 µm band, and has potential in various applications.

  9. The Pearson walk with shrinking steps in two dimensions

    NASA Astrophysics Data System (ADS)

    Serino, C. A.; Redner, S.

    2010-01-01

    We study the shrinking Pearson random walk in two and higher dimensions, in which the direction of the Nth step is random and its length equals λ^(N-1), with λ < 1. As λ increases past a critical value λ_c, the endpoint distribution in two dimensions, P(r), changes from having a global maximum away from the origin to being peaked at the origin. The probability distribution for a single coordinate, P(x), undergoes a similar transition, but exhibits multiple maxima on a fine length scale for λ close to λ_c. We numerically determine P(r) and P(x) by applying a known algorithm that accurately inverts the exact Bessel function product form of the Fourier transform of the probability distributions.
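
    A direct simulation of the walk just described is straightforward; the sketch below samples endpoint radii for a given λ (the walk count and step count are arbitrary choices). Since step N has length λ^(N-1), no endpoint can lie farther than the geometric series Σ λ^k = 1/(1-λ) from the origin:

```python
import math
import random

def endpoint_radius(lam, n_steps, rng):
    """One shrinking Pearson walk in 2D: random directions, step N has length lam**(N-1)."""
    x = y = 0.0
    step = 1.0
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        step *= lam  # each step shrinks by the factor lam
    return math.hypot(x, y)

def sample_radii(lam, walks=5000, n_steps=60, seed=7):
    rng = random.Random(seed)
    return [endpoint_radius(lam, n_steps, rng) for _ in range(walks)]

radii = sample_radii(0.5)  # lam = 0.5: endpoints confined to radius < 2
```

A histogram of `radii` for different λ would show the transition in P(r) that the paper computes exactly via the Bessel-product Fourier transform.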

  10. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a log-normal random variable. The analytical and simulation results corroborate that an increase in the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity offers better resistance to the channel fading caused by spatial correlation.
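
    Wilkinson's moment-matching idea can be sketched for the equicorrelated case: match the first two moments of a sum of n correlated lognormals to a single lognormal, then check the mean against Monte Carlo. The parameter values below are arbitrary, and the one-factor correlation structure is an illustrative assumption:

```python
import math
import random

def wilkinson_params(n, mu, sigma, rho):
    """Moment-match the sum of n equicorrelated lognormals to one lognormal."""
    m1 = n * math.exp(mu + sigma ** 2 / 2.0)                       # E[S]
    m2 = (n * math.exp(2.0 * mu + 2.0 * sigma ** 2)                # E[S^2]
          + n * (n - 1) * math.exp(2.0 * mu + sigma ** 2 * (1.0 + rho)))
    sigma_s2 = math.log(m2 / m1 ** 2)        # matched log-variance
    mu_s = math.log(m1) - sigma_s2 / 2.0     # matched log-mean
    return mu_s, sigma_s2

def simulate_sum_mean(n, mu, sigma, rho, trials=20000, seed=3):
    """Monte Carlo mean of the sum, using a one-factor equicorrelation structure."""
    rng = random.Random(seed)
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    total = 0.0
    for _ in range(trials):
        z0 = rng.gauss(0.0, 1.0)  # shared factor inducing correlation rho
        total += sum(math.exp(mu + sigma * (a * z0 + b * rng.gauss(0.0, 1.0)))
                     for _ in range(n))
    return total / trials

mu_s, sigma_s2 = wilkinson_params(4, 0.0, 0.5, 0.3)
approx_mean = math.exp(mu_s + sigma_s2 / 2.0)   # mean of the matched lognormal
mc_mean = simulate_sum_mean(4, 0.0, 0.5, 0.3)
```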

  11. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
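
    The encounter-rate idea above is easy to prototype: walkers on a torus grid count how many others share their cell each step, and under uniform mixing the per-walker co-occupancy rate approaches (k - 1)/n², where k is the number of walkers and n² the number of cells. A minimal sketch, with grid size and walker count chosen arbitrarily:

```python
import random

def estimate_encounter_rate(n=20, walkers=200, steps=200, seed=5):
    """Walkers on an n-by-n torus; each step, count others sharing each walker's cell."""
    rng = random.Random(seed)
    pos = [(rng.randrange(n), rng.randrange(n)) for _ in range(walkers)]
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    encounters = 0
    for _ in range(steps):
        new_pos = []
        for x, y in pos:
            dx, dy = rng.choice(moves)  # each walker takes one random-walk step
            new_pos.append(((x + dx) % n, (y + dy) % n))
        pos = new_pos
        counts = {}
        for p in pos:
            counts[p] = counts.get(p, 0) + 1
        # ordered pairs of walkers sharing a cell this step
        encounters += sum(c * (c - 1) for c in counts.values())
    return encounters / (walkers * steps)

true_rate = (200 - 1) / (20 * 20)  # expected co-occupants under uniform mixing
estimate = estimate_encounter_rate()
```

The estimate tracks the true rate despite the repeated-collision dependencies the paper analyzes rigorously.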

  12. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.

  13. Distributed reservation control protocols for random access broadcasting channels

    NASA Technical Reports Server (NTRS)

    Greene, E. P.; Ephremides, A.

    1981-01-01

    Attention is given to a communication network consisting of an arbitrary number of nodes which can communicate with each other via a time-division multiple access (TDMA) broadcast channel. The reported investigation is concerned with the development of efficient distributed multiple access protocols for traffic consisting primarily of single packet messages in a datagram mode of operation. The motivation for the design of the protocols came from the consideration of efficient multiple access utilization of moderate to high bandwidth (4-40 Mbit/s capacity) communication satellite channels used for the transmission of short (1000-10,000 bits) fixed length packets. Under these circumstances, the ratio of roundtrip propagation time to packet transmission time is between 100 to 10,000. It is shown how a TDMA channel can be adaptively shared by datagram traffic and constant bandwidth users such as in digital voice applications. The distributed reservation control protocols described are a hybrid between contention and reservation protocols.

  14. Distinguishability of generic quantum states

    NASA Astrophysics Data System (ADS)

    Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol

    2016-06-01

    Properties of random mixed states of dimension N distributed uniformly with respect to the Hilbert-Schmidt measure are investigated. We show that for large N, due to the concentration of measure, the trace distance between two random states tends to a fixed number D̃ = 1/4 + 1/π, which yields the Helstrom bound on their distinguishability. To arrive at this result, we apply free random calculus and derive the symmetrized Marchenko-Pastur distribution, which is shown to describe numerical data for the model of coupled quantum kicked tops. The asymptotic value for the root fidelity between two random states, √F = 3/4, can serve as a universal reference value for further theoretical and experimental studies. Analogous results for the quantum relative entropy and the Chernoff quantity provide other bounds on the distinguishability of both states in a multiple-measurement setup due to the quantum Sanov theorem. We also study the mean entropy of coherence of random pure and mixed states and the entanglement of a generic mixed state of a bipartite system.
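
    The concentration of the trace distance can be checked numerically. A minimal sketch (the Ginibre construction of Hilbert-Schmidt random states is standard; N = 200 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hs_state(N):
    # Hilbert-Schmidt random state: rho = G G† / Tr(G G†), G a Ginibre matrix
    G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

N = 200
rho, sigma = random_hs_state(N), random_hs_state(N)
# trace distance: half the sum of absolute eigenvalues of rho - sigma
D = 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()
print(D)  # concentrates near 1/4 + 1/pi ≈ 0.568 for large N
```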

  15. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling a subnet is an important topic in complex network research, and the sampling method influences the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. The method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.
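
    The abstract does not spell out the RMSC procedure, so the following is only a generic sketch of its two ingredients, random seed selection combined with snowball expansion, on a toy graph; the actual RMSC process differs in detail.

```python
import random

random.seed(1)

# toy Erdős–Rényi-style graph as an adjacency dict
n, p = 500, 0.02
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

def multi_snowball(adj, n_seeds=5, depth=2):
    # random sampling picks the seeds (global reach);
    # snowball waves around each seed capture local structure
    sampled = set(random.sample(sorted(adj), n_seeds))
    frontier = set(sampled)
    for _ in range(depth):
        frontier = {w for v in frontier for w in adj[v]} - sampled
        sampled |= frontier
    return sampled

subnet = multi_snowball(adj)
print(len(subnet), "of", n, "nodes sampled")
```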

  16. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is obtained. Namely, the distribution has a heavier tail for small (when q<1) or large (when q>1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.

  17. Effect of Rayleigh-scattering distributed feedback on multiwavelength Raman fiber laser generation.

    PubMed

    El-Taher, A E; Harper, P; Babin, S A; Churkin, D V; Podivilov, E V; Ania-Castanon, J D; Turitsyn, S K

    2011-01-15

    We experimentally demonstrate a Raman fiber laser based on multiple point-action fiber Bragg grating reflectors and distributed feedback via Rayleigh scattering in an ~22-km-long optical fiber. Twenty-two lasing lines with spacing of ~100 GHz (close to the International Telecommunication Union grid) in the C band are generated at the watt level. In contrast to the normal cavity with competition between laser lines, the random distributed feedback cavity exhibits highly stable multiwavelength generation with a power-equalized uniform distribution, which is almost independent of power.

  18. Visualizing Time-Varying Distribution Data in EOS Application

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei

    2004-01-01

    In this research, we have developed several novel visualization methods for spatial probability density function data. Our focus has been on 2D spatial datasets, where each pixel is a random variable, and has multiple samples which are the results of experiments on that random variable. We developed novel clustering algorithms as a means to reduce the information contained in these datasets; and investigated different ways of interpreting and clustering the data.

  19. Kanerva's sparse distributed memory with multiple hamming thresholds

    NASA Technical Reports Server (NTRS)

    Pohja, Seppo; Kaski, Kimmo

    1992-01-01

    If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of the hard locations for reading or writing can be done in parallel, which implementations of SDM can exploit.

  20. Universal behavior of the interoccurrence times between losses in financial markets: independence of the time resolution.

    PubMed

    Ludescher, Josef; Bunde, Armin

    2014-12-01

    We consider representative financial records (stocks and indices) on time scales between one minute and one day, as well as historical monthly data sets, and show that the distribution P_Q(r) of the interoccurrence times r between losses below a negative threshold -Q, for fixed mean interoccurrence time R_Q in multiples of the corresponding time resolution, can be described on all time scales by the same q-exponential, P_Q(r) ∝ 1/[1 + (q-1)βr]^(1/(q-1)). We propose that the asset- and time-scale-independent analytic form of P_Q(r) can be regarded as an additional stylized fact of the financial markets and represents a nontrivial test for market models. We analyze the distribution P_Q(r) as well as the autocorrelation C_Q(s) of the interoccurrence times for three market models: (i) multiplicative random cascades, (ii) multifractal random walks, and (iii) the generalized autoregressive conditional heteroskedasticity [GARCH(1,1)] model. We find that only one of the considered models, the multifractal random walk model, approximately reproduces the q-exponential form of P_Q(r) and the power-law decay of C_Q(s).
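
    The q-exponential form can be written down directly; a quick check (with arbitrary r and β) that it reduces to the ordinary exponential as q → 1, and that q > 1 fattens the tail:

```python
import math

def q_exponential(r, beta, q):
    # P_Q(r) ∝ 1 / [1 + (q - 1) * beta * r]^(1 / (q - 1))
    return (1.0 + (q - 1.0) * beta * r) ** (-1.0 / (q - 1.0))

r, beta = 2.0, 0.5
# as q -> 1 the q-exponential reduces to the ordinary exponential exp(-beta*r)
print(q_exponential(r, beta, 1.0001), math.exp(-beta * r))
# q > 1 gives a heavier tail than the exponential at the same point
print(q_exponential(r, beta, 1.5))
```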

  2. Multiple transmitter performance with appropriate amplitude modulation for free-space optical communication.

    PubMed

    Tellez, Jason A; Schmidt, Jason D

    2011-08-20

    The propagation of a free-space optical communications signal through atmospheric turbulence experiences random fluctuations in intensity, including signal fades, which negatively impact the performance of the communications link. The gamma-gamma probability density function is commonly used to model the scintillation of a single beam. One proposed method to reduce the occurrence of scintillation-induced fades at the receiver plane involves the use of multiple beams propagating through independent paths, resulting in a sum of independent gamma-gamma random variables. Recently an analytical model for the probability distribution of irradiance from the sum of multiple independent beams was developed. Because truly independent beams are practically impossible to create, we present here a more general but approximate model for the distribution of beams traveling through partially correlated paths. This model compares favorably with wave-optics simulations and highlights the reduced scintillation as the number of transmitted beams is increased. Additionally, a pulse-position modulation scheme is used to reduce the impact of signal fades when they occur. Analytical and simulated results showed significantly improved performance when compared to fixed threshold on/off keying. © 2011 Optical Society of America
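
    The claim that summing beams reduces scintillation can be illustrated with unit-mean gamma-gamma samples. The turbulence parameters α and β below are arbitrary assumptions, and the beams here are fully independent rather than partially correlated as in the paper's model.

```python
import random, statistics

random.seed(2)
alpha, beta = 4.0, 2.0  # assumed large- and small-scale turbulence parameters

def gamma_gamma():
    # unit-mean gamma-gamma irradiance: product of two independent gamma variates
    return random.gammavariate(alpha, 1 / alpha) * random.gammavariate(beta, 1 / beta)

n = 20000
single = [gamma_gamma() for _ in range(n)]
# averaging 4 (here fully independent) beams smooths intensity fluctuations
summed = [sum(gamma_gamma() for _ in range(4)) / 4 for _ in range(n)]

def si(samples):  # scintillation index: variance over squared mean
    m = statistics.mean(samples)
    return statistics.variance(samples) / (m * m)

si_single, si_summed = si(single), si(summed)
print(si_single, si_summed)  # theory: 1/a + 1/b + 1/(a*b) = 0.875 for one beam
```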

  3. Ant-inspired density estimation via random walks

    PubMed Central

    Musco, Cameron; Su, Hsin-Hao

    2017-01-01

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a small number of steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146
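
    The encounter-rate mechanism can be sketched with walkers on a torus. Grid size, agent count and step count below are arbitrary choices, and "encounter" is simplified to co-occupying a cell:

```python
import random

random.seed(3)
W, n_agents, steps = 40, 160, 400
true_density = n_agents / (W * W)  # 0.1 agents per grid cell
moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]

# anonymous agents random-walk on a torus; an encounter = sharing a cell
pos = [(random.randrange(W), random.randrange(W)) for _ in range(n_agents)]
total_encounters = 0
for _ in range(steps):
    new_pos = []
    for x, y in pos:
        dx, dy = random.choice(moves)
        new_pos.append(((x + dx) % W, (y + dy) % W))
    pos = new_pos
    counts = {}
    for p in pos:
        counts[p] = counts.get(p, 0) + 1
    # each agent counts the other agents sharing its cell this step
    total_encounters += sum(c * (c - 1) for c in counts.values())

estimate = total_encounters / (n_agents * steps)  # encounters per agent-step
print(estimate, true_density)  # the encounter rate tracks the density
```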

  4. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
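
    The role of the Gumbel assumption is that it yields the closed-form logit choice probabilities. A small simulation check (the systematic utilities below are arbitrary):

```python
import random, math

random.seed(4)
V = [1.0, 0.5, 0.0]  # assumed systematic utilities of three alternatives

def simulate_choice():
    # utility maximization with i.i.d. standard Gumbel errors:
    # a standard Gumbel variate is -log(-log(U)) for U ~ Uniform(0, 1)
    u = [v - math.log(-math.log(random.random())) for v in V]
    return max(range(len(V)), key=u.__getitem__)

n = 100000
freq = [0] * len(V)
for _ in range(n):
    freq[simulate_choice()] += 1
simulated = [f / n for f in freq]

# the Gumbel assumption is exactly what yields the closed-form logit formula
denom = sum(math.exp(v) for v in V)
logit = [math.exp(v) / denom for v in V]
print(simulated, logit)  # the two sets of probabilities agree
```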

  6. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System.

    PubMed

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-11-16

    In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: first, a geometric distribution function combined with a fairness consideration generates feasible scheduling schemes for reader operation; after that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that the readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme and a larger effective interrogation range.

  8. Answer (in part) blowing in the wind. Comment on "Liberating Lévy walk research from the shackles of optimal foraging" by A. Reynolds

    NASA Astrophysics Data System (ADS)

    Cheng, Ken

    2015-09-01

    In a perspective in this issue, based on a thorough review, Andy Reynolds [1] tackles the question of how the now ubiquitously found Lévy walks can be generated by animals, by organisms other than animals, and by other forms of life below the level of organisms, such as cells. The answer comes not in a single whole cloth, but rather in a patchwork of generating factors. Lévy-like movements arise in objects blowing in the wind, or from travelers encountering turbulence in the seas or being repelled by boundaries. A variety of desiderata in movements, not related to achieving optimal foraging, may also engender Lévy-like movements. These include avoiding other organisms or not crossing one's traveled path. Adding to that plethora are ways in which variations on the theme of garden-variety random walks can at least approach a Lévy walk, if not capture the mathematical form perfectly. Such variations include executing random walks on multiple scales, a strategy exhibited by desert ants [2,3], mussels [4], and quite likely extant hunter-gatherer humans as well [5]. It is possible that fossil tracks over 50 million years old also show this strategy, as curve fitting with multiple random walks, characterized by multiple exponential distributions, is as good as or better than curve fits using the power-law distribution characteristic of Lévy walks [6]. Another variation is to have a random walk search whose scale expands over time. In great detail, and based on an extensive literature - the review has over 200 references - a range of other ways in which Lévy-like movements might come about is also discussed.
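
    The "multiple random walks, multiple exponential distributions" idea can be illustrated by comparing tail weights: a mixture of exponential step lengths on several scales (the scales below are arbitrary) has a far heavier tail than a single exponential with the same mean, which is what lets composite walks mimic Lévy-like movement.

```python
import random

random.seed(5)
n = 100000
scales = [1.0, 10.0, 100.0]  # a walk mixing exponential steps on three scales

composite = [random.expovariate(1 / random.choice(scales)) for _ in range(n)]
mean_scale = sum(scales) / len(scales)  # 37, so both walks share a mean step
plain = [random.expovariate(1 / mean_scale) for _ in range(n)]

def tail(xs, t):  # fraction of steps longer than t
    return sum(x > t for x in xs) / len(xs)

print(tail(composite, 300), tail(plain, 300))  # composite tail is far heavier
```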

  9. Prescription-induced jump distributions in multiplicative Poisson processes.

    PubMed

    Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos

    2011-06-01

    Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.

  11. Distributed multiple path routing in complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Wang, San-Xiu; Wu, Ling-Wei; Mei, Pan; Yang, Xu-Hua; Wen, Guang-Hui

    2016-12-01

    Routing in complex transmission networks is an important problem that has garnered extensive research interest in recent years. In this paper, we propose a novel routing strategy called the distributed multiple path (DMP) routing strategy. For each of the origin-destination (O-D) node pairs in a given network, the DMP routing strategy computes and stores, in advance, multiple short paths that overlap little with each other. During the transmission stage, it rapidly selects from the pre-computed paths an actual routing path with low transmission cost for each transmission task, according to real-time network transmission status information. Computer simulation results obtained for lattice, ER random, and scale-free networks indicate that the strategy can significantly improve the anti-congestion ability of transmission networks, as well as provide favorable routing robustness against partial network failures.
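
    The path pre-computation can be approximated with a greedy sketch: take a BFS shortest path, ban its edges, and search again. This only illustrates "multiple short, low-overlap paths" and is not the DMP algorithm itself.

```python
from collections import deque

def bfs_path(adj, s, t, banned=frozenset()):
    # shortest path by breadth-first search, avoiding 'banned' edges
    prev = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                queue.append(v)
    return None

def multiple_paths(adj, s, t, k=2):
    # greedily collect up to k short paths that share no edges
    paths, banned = [], set()
    for _ in range(k):
        p = bfs_path(adj, s, t, banned)
        if p is None:
            break
        paths.append(p)
        banned |= {frozenset(e) for e in zip(p, p[1:])}
    return paths

# toy network: a six-node ring has two edge-disjoint routes from 0 to 3
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
paths = multiple_paths(ring, 0, 3)
print(paths)  # the two routes go around opposite sides of the ring
```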

  12. Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang

    2016-01-01

    The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Because controller design under the multiple constraints of the multirate switching model is difficult, the model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that the closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller and a single-rate parallel distributed compensation under the same conditions.

  13. Does the central limit theorem always apply to phase noise? Some implications for radar problems

    NASA Astrophysics Data System (ADS)

    Gray, John E.; Addison, Stephen R.

    2017-05-01

    The phase noise problem or Rayleigh problem occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniformly distributed, and the Central Limit Theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT and hence the superposition can be treated as a normal distribution. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the characteristic function (CF) of a sum of identically distributed random variables is the product of the individual CFs. The product of the CFs for phase noise can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof, as discussed in Papoulis, to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
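
    The slow approach to Gaussianity for few components can be seen in the fourth moment. Each component cos(θ) with uniform θ is arcsine-distributed with excess kurtosis -1.5, and the excess decays like -1.5/n for a sum of n components, so few-component superpositions remain visibly non-Gaussian:

```python
import random, statistics, math

random.seed(6)

def resultant(n_components):
    # real part of a superposition of unit phasors with uniform random phases
    return sum(math.cos(random.uniform(0, 2 * math.pi)) for _ in range(n_components))

def excess_kurtosis(xs):
    m = statistics.mean(xs)
    v = statistics.pvariance(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * v * v) - 3.0

ks = {}
for n in (1, 2, 20):
    ks[n] = excess_kurtosis([resultant(n) for _ in range(50000)])
    print(n, round(ks[n], 2))  # climbs toward the Gaussian value 0 as n grows
```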

  14. Shock Interaction with Random Spherical Particle Beds

    NASA Astrophysics Data System (ADS)

    Neal, Chris; Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S. "Bala"; Thakur, Siddharth

    2016-11-01

    In this talk we present results on fully resolved simulations of shock interaction with a randomly distributed bed of particles. Multiple simulations were carried out, varying the number of particles to isolate the effect of volume fraction. The major focus of these simulations was to understand 1) the effect of the shockwave and volume fraction on the forces experienced by the particles, 2) the effect of the particles on the shock wave, and 3) fluid-mediated particle-particle interactions. The peak drag force on particles at different volume fractions shows a downward trend as the depth of the bed increases, which can be attributed to dissipation of energy as the shockwave travels through the bed of particles. One of the fascinating observations from these simulations was the fluctuations in different quantities due to the presence of multiple particles and their random distribution. These are large simulations with hundreds of particles resulting in a large amount of data. We present statistical analysis of the data and make relevant observations. The average pressure in the computational domain is computed to characterize the strengths of the reflected and transmitted waves. We also present flow field contour plots to support our observations. U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  15. Pharmacokinetics and Safety of Intravenous Murepavadin Infusion in Healthy Adult Subjects Administered Single and Multiple Ascending Doses.

    PubMed

    Wach, Achim; Dembowsky, Klaus; Dale, Glenn E

    2018-04-01

    Murepavadin is the first in class of the outer membrane protein-targeting antibiotics (OMPTA) and a pathogen-specific peptidomimetic antibacterial with a novel, nonlytic mechanism of action targeting Pseudomonas aeruginosa. Murepavadin is being developed for the treatment of hospital-acquired bacterial pneumonia (HABP) and ventilator-associated bacterial pneumonia (VABP). The pharmacokinetics (PK) and safety of single and multiple doses of murepavadin were investigated in healthy male subjects. Part A of the study was a double-blind, randomized, placebo-controlled, single-ascending-dose investigation in 10 sequential cohorts where each cohort comprised 6 healthy male subjects; 4 subjects were randomized to murepavadin, and 2 subjects were randomized to placebo. Part B was a double-blind, randomized, placebo-controlled, multiple-ascending-dose investigation in 3 sequential cohorts. After a single dose of murepavadin, the geometric mean half-life (2.52 to 5.30 h), the total clearance (80.1 to 114 ml/h/kg), and the volume of distribution (415 to 724 ml/kg) were consistent across dose levels. The pharmacokinetics of the dosing regimens evaluated were dose proportional and linear. Murepavadin was well tolerated, adverse events were transient and generally mild, and no dose-limiting toxicity was identified. Copyright © 2018 American Society for Microbiology.
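
    As a rough consistency check, the reported half-life, clearance and volume of distribution can be related through the one-compartment formula t½ = ln(2)·V/CL. This is an idealization, and the reported ranges are pooled across dose levels rather than paired, so the formula only brackets the half-life range:

```python
import math

# one-compartment sanity check (an assumption; reported ranges are not paired)
CL = (80.1, 114.0)  # total clearance, ml/h/kg
V = (415.0, 724.0)  # volume of distribution, ml/kg

t_short = math.log(2) * V[0] / CL[1]  # smallest volume with largest clearance
t_long = math.log(2) * V[1] / CL[0]   # largest volume with smallest clearance
print(round(t_short, 2), round(t_long, 2))  # brackets the reported 2.52-5.30 h
```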

  16. Random walk in nonhomogeneous environments: A possible approach to human and animal mobility

    NASA Astrophysics Data System (ADS)

    Srokowski, Tomasz

    2017-03-01

    The random walk process in a nonhomogeneous medium, characterized by a Lévy stable distribution of jump length, is discussed. The width depends on a position: either before the jump or after that. In the latter case, the density slope is affected by the variable width and the variance may be finite; then all kinds of the anomalous diffusion are predicted. In the former case, only the time characteristics are sensitive to the variable width. The corresponding Langevin equation with different interpretations of the multiplicative noise is discussed. The dependence of the distribution width on position after jump is interpreted in terms of cognitive abilities and related to such problems as migration in a human population and foraging habits of animals.

  17. A generalization of random matrix theory and its application to statistical physics.

    PubMed

    Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H

    2017-02-01

    To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples: inflation rates and air pressure data for 95 US cities.
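
    The effect of auto-correlation on the eigenvalue spectrum can be demonstrated with independent AR(1) series: even with no true cross-correlation, the largest eigenvalue of the sample correlation matrix exceeds the Marchenko-Pastur edge that applies to i.i.d. data. The parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n, phi = 2000, 100, 0.6  # series length, number of series, AR(1) coefficient

# independent AR(1) series: no true cross-correlation, but each series is
# auto-correlated, which widens the sample-correlation eigenvalue spectrum
x = np.zeros((n, T))
eps = rng.normal(size=(n, T))
for t in range(1, T):
    x[:, t] = phi * x[:, t - 1] + eps[:, t]

z = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
eigs = np.linalg.eigvalsh(z @ z.T / T)
mp_upper = (1 + np.sqrt(n / T)) ** 2  # Marchenko-Pastur edge for i.i.d. data
print(eigs.max(), mp_upper)  # the largest eigenvalue leaks past the MP edge
```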

  18. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given the amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
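The sampling step described above, generating realizations of the population mean and covariance from limited data, can be sketched as follows. The exact parameterization used by the authors is not given in the abstract, so the inverse-Wishart/multivariate-t form below is an assumed, standard conjugate-style choice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative data: n observations of d correlated input variables.
n, d = 25, 2
data = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 2.0]], size=n)
m, S = data.mean(axis=0), np.cov(data, rowvar=False)

# One realization of the population covariance (inverse-Wishart centred on
# the sample covariance) and of the population mean (multivariate t centred
# on the sample mean). The paper's parameterization may differ.
Sigma = stats.invwishart.rvs(df=n - 1, scale=S * (n - 1), random_state=rng)
mean = np.ravel(stats.multivariate_t.rvs(loc=m, shape=S / n, df=n - 1,
                                         random_state=rng))
```

Repeating these draws propagates the limited-data uncertainty about the inputs into the variance of the simulation output.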

  19. Assessing Community College Student Knowledge in the Liberal Arts.

    ERIC Educational Resources Information Center

    Cohen, Arthur M.; Schuetz, Pam; Chang, June C.; Plecha, Michelle D.

    This paper describes an assessment of community college student knowledge in the liberal arts at two-year colleges in Southern California. A survey instrument with multiple choice questions covering five liberal arts subject areas was distributed to 4,200 students in randomly selected classes at ten colleges. More than 2,500 questionnaires were…

  20. Matching with Multiple Control Groups with Adjustment for Group Differences

    ERIC Educational Resources Information Center

    Stuart, Elizabeth A.; Rubin, Donald B.

    2008-01-01

    When estimating causal effects from observational data, it is desirable to approximate a randomized experiment as closely as possible. This goal can often be achieved by choosing a subsample from the original control group that matches the treatment group on the distribution of the observed covariates. However, sometimes the original control group…

  1. Simulation using computer-piloted point excitations of vibrations induced on a structure by an acoustic environment

    NASA Astrophysics Data System (ADS)

    Monteil, P.

    1981-11-01

    Computation of the overall levels and spectral densities of the responses measured on a launcher skin (the fairing, for instance) immersed in a random acoustic environment during takeoff was studied. The analysis of the transmission of these vibrations to the payload required the simulation of these responses by a shaker control system using a small number of distributed shakers. Results show that this closed-loop computerized digital system allows the acquisition of auto- and cross-spectral densities equal to those of the previously computed responses. However, wider application is sought, e.g., to road and runway profiles. The problems of multiple input-output system identification, multiple true random signal generation, and real-time programming are addressed. The system should allow for the control of four shakers.

  2. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    PubMed

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
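The fundamental SRS operation, an equidistant grid placed with a single random offset, can be sketched like this (an illustrative re-implementation of the principle, not RandomSpot's actual code):

```python
import numpy as np

rng = np.random.default_rng(3)

def srs_points(x0, y0, width, height, spacing):
    # Systematic random sampling: an equidistant grid over a rectangular
    # region of interest, shifted by one uniform random offset so the
    # placement is unbiased.
    ox, oy = rng.uniform(0, spacing, size=2)
    xs = np.arange(x0 + ox, x0 + width, spacing)
    ys = np.arange(y0 + oy, y0 + height, spacing)
    return [(x, y) for y in ys for x in xs]

# A 1000 x 800 region sampled every 100 units; each point would then be
# classified visually (e.g. tumour vs stroma) to estimate class ratios.
pts = srs_points(0, 0, 1000, 800, spacing=100)
```

Because every location has the same inclusion probability, class proportions estimated from the points are unbiased, while the regular spacing keeps the number of observations small.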

  3. Quantification of the heterogeneity of prognostic cellular biomarkers in ewing sarcoma using automated image and random survival forest analysis.

    PubMed

    Bühnemann, Claudia; Li, Simon; Yu, Haiyue; Branford White, Harriet; Schäfer, Karl L; Llombart-Bosch, Antonio; Machado, Isidro; Picci, Piero; Hogendoorn, Pancras C W; Athanasou, Nicholas A; Noble, J Alison; Hassan, A Bassim

    2014-01-01

    Driven by genomic somatic variation, tumour tissues are typically heterogeneous, yet unbiased quantitative methods are rarely used to analyse heterogeneity at the protein level. Motivated by this problem, we developed automated image segmentation of images of multiple biomarkers in Ewing sarcoma to generate distributions of biomarkers between and within tumour cells. We further integrate high dimensional data with patient clinical outcomes utilising random survival forest (RSF) machine learning. Using material from cohorts of genetically diagnosed Ewing sarcoma with EWSR1 chromosomal translocations, confocal images of tissue microarrays were segmented with level sets and watershed algorithms. Each cell nucleus and cytoplasm were identified in relation to DAPI and CD99, respectively, and protein biomarkers (e.g. Ki67, pS6, Foxo3a, EGR1, MAPK) localised relative to nuclear and cytoplasmic regions of each cell in order to generate image feature distributions. The image distribution features were analysed with RSF in relation to known overall patient survival from three separate cohorts (185 informative cases). Variation in pre-analytical processing resulted in elimination of a high number of non-informative images that had poor DAPI localisation or biomarker preservation (67 cases, 36%). The distribution of image features for biomarkers in the remaining high quality material (118 cases, 104 features per case) were analysed by RSF with feature selection, and performance assessed using internal cross-validation, rather than a separate validation cohort. A prognostic classifier for Ewing sarcoma with low cross-validation error rates (0.36) was comprised of multiple features, including the Ki67 proliferative marker and a sub-population of cells with low cytoplasmic/nuclear ratio of CD99. 
Through this elimination of bias, evaluation of high-dimensional biomarker distributions within the cell populations of a tumour using random forest analysis could be achieved in quality-controlled tumour material. Such an automated and integrated methodology has potential application in the identification of prognostic classifiers based on tumour cell heterogeneity.

  4. Time interval between successive trading in foreign currency market: from microscopic to macroscopic

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2004-12-01

    Recently, it has been shown that the inter-transaction interval (ITI) distribution of foreign currency rates has a fat tail. In order to understand the statistical properties of the ITI, a dealer model with N interacting agents is proposed. Numerical simulations confirm that the ITI distribution of the dealer model has a power-law tail. The random multiplicative process (RMP) can be approximately derived from the ITI of the dealer model. Consequently, we conclude that the power-law tail of the ITI distribution of the dealer model is a result of the RMP.
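The mechanism invoked here, a random multiplicative process producing a power-law tail, can be demonstrated with a Kesten-type recursion; the uniform multiplier distribution and all parameters below are illustrative choices, not the dealer model itself:

```python
import numpy as np

rng = np.random.default_rng(5)

# Kesten-type random multiplicative process with reinjection:
# x_t = b_t * x_{t-1} + 1. With E[log b] < 0 the process is stationary,
# yet P(x > X) ~ X**(-beta): a power-law tail from pure multiplication.
n = 200_000
b = rng.uniform(0.2, 1.6, size=n)   # multipliers, E[log b] < 0 (illustrative)
x = np.empty(n)
x[0] = 1.0
for t in range(1, n):
    x[t] = b[t] * x[t - 1] + 1.0
```

The sample maximum dwarfs the median and the mean sits well above the median, the signatures of the heavy right tail that multiplication produces.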

  5. Distributed computation: the new wave of synthetic biology devices.

    PubMed

    Macía, Javier; Posas, Francesc; Solé, Ricard V

    2012-06-01

    Synthetic biology (SB) offers a unique opportunity for designing complex molecular circuits able to perform predefined functions. But the goal of achieving a flexible toolbox of reusable molecular components has been shown to be limited due to circuit unpredictability, incompatible parts or random fluctuations. Many of these problems arise from the challenges posed by engineering the molecular circuitry: multiple wires are usually difficult to implement reliably within one cell and the resulting systems cannot be reused in other modules. These problems are solved by means of a nonstandard approach to single cell devices, using cell consortia and allowing the output signal to be distributed among different cell types, which can be combined in multiple, reusable and scalable ways. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Multi-hazard Assessment and Scenario Toolbox (MhAST): A Framework for Analyzing Compounding Effects of Multiple Hazards

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Moftakhari, H.; AghaKouchak, A.

    2017-12-01

    Many natural hazards are driven by multiple forcing variables, and concurrent or consecutive extreme events significantly increase the risk of infrastructure/system failure. It is a common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of return period and design level scenarios and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas, 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities, 3. weighted average scenario analysis based on an expected event, and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.
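A joint "AND" exceedance probability of the kind such a toolbox computes can be sketched with a Gaussian copula (one of many copula families considered; the correlation `rho` and the 100-year marginal levels below are assumptions for illustration):

```python
import numpy as np
from scipy import stats

# Two hazard drivers coupled by a Gaussian copula with correlation rho.
rho = 0.6
u = v = 0.99                       # marginal non-exceedance probs (100-yr events)
z = stats.norm.ppf([u, v])
cov = [[1.0, rho], [rho, 1.0]]

# C(u, v): probability both drivers stay below their 100-yr levels.
C_uv = stats.multivariate_normal.cdf(z, mean=[0.0, 0.0], cov=cov)

# "AND" scenario: both drivers exceed their levels simultaneously.
p_and = 1 - u - v + C_uv
T_and = 1 / p_and                  # joint return period, in years
```

With positive dependence the joint return period is longer than either marginal 100-year period but far shorter than the independence value, which is why treating a single "ruling" driver in isolation can misstate the risk.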

  7. Texture and anisotropy in ferroelectric lead metaniobate

    NASA Astrophysics Data System (ADS)

    Iverson, Benjamin John

    Ferroelectric lead metaniobate, PbNb2O6, is a piezoelectric ceramic typically used because of its elevated Curie temperature and anisotropic properties. However, the piezoelectric constant, d33, is relatively low in randomly oriented ceramics when compared to other ferroelectrics. Crystallographic texturing is often employed to increase the piezoelectric constant because the spontaneous polarization axes of grains are better aligned. In this research, crystallographic textures induced through tape casting are distinguished from textures induced through electrical poling. Texture is described using multiple quantitative approaches utilizing X-ray and neutron time-of-flight diffraction. Tape casting lead metaniobate with an inclusion of acicular template particles induces an orthotropic texture distribution. Templated grain growth from seed particles oriented during casting results in anisotropic grain structures. The degree of preferred orientation is directly linked to the shear behavior of the tape cast slurry. Increases in template concentration, slurry viscosity, and casting velocity lead to larger textures by inducing more particle orientation in the tape casting plane. The maximum 010 texture distributions were two and a half multiples of a random distribution. Ferroelectric texture was induced by electrical poling. Electric poling increases the volume of material oriented with the spontaneous polarization direction in the material. Samples with an initial paraelectric texture exhibit a greater change in the domain volume fraction during electrical poling than randomly oriented ceramics. In tape cast samples, the resulting piezoelectric response is proportional to the 010 texture present prior to poling. This results in property anisotropy dependent on initial texture. Piezoelectric properties measured on the most textured ceramics were similar to those obtained with a commercial standard.

  8. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

    NASA Astrophysics Data System (ADS)

    Bruno, Marcelo G. S.; Dias, Stiven S.

    2014-12-01

    We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

  9. Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.

    PubMed

    Anisimov, Vladimir V

    2011-01-01

    This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
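The Poisson-gamma recruitment model with centre-stratified block-permuted randomization can be sketched as follows (all parameters are illustrative; block size 2 is assumed for the 1:1 two-arm case):

```python
import numpy as np

rng = np.random.default_rng(11)

# Poisson-gamma recruitment: each centre's rate is drawn from a gamma
# population, and patients then arrive as a Poisson process.
n_centres, T = 40, 12.0                                  # centres, months
rates = rng.gamma(shape=2.0, scale=1.5, size=n_centres)  # patients/month
counts = rng.poisson(rates * T)                          # recruited per centre

# With block size 2, the per-centre imbalance between the two arms is 0 for
# an even count and +/-1 for an odd one, so the total imbalance is a sum of
# n_centres bounded terms and is approximately normal for many centres.
imbalance = sum(int(rng.choice([-1, 1])) * (c % 2) for c in counts)
```

Repeating this simulation many times yields the predicted distributions of per-region patient counts and of the total imbalance that the paper derives in closed form.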

  10. Quantitative analysis of random ameboid motion

    NASA Astrophysics Data System (ADS)

    Bödeker, H. U.; Beta, C.; Frank, T. D.; Bodenschatz, E.

    2010-04-01

    We quantify random migration of the social ameba Dictyostelium discoideum. We demonstrate that the statistics of cell motion can be described by an underlying Langevin-type stochastic differential equation. An analytic expression for the velocity distribution function is derived. The separation into deterministic and stochastic parts of the movement shows that the cells undergo a damped motion with multiplicative noise. Both contributions to the dynamics display a distinct response to external physiological stimuli. The deterministic component depends on the developmental state and ambient levels of signaling substances, while the stochastic part does not.
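A damped Langevin equation with multiplicative noise of the kind described can be simulated with an Euler-Maruyama scheme; the noise-strength form `D(v)` below is an assumed example, not the form fitted from the cell data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama integration of dv = -gamma*v dt + sqrt(2*D(v)) dW,
# with an assumed state-dependent noise strength D(v) = d0 + d1*v**2.
gamma, d0, d1, dt, n = 1.0, 0.05, 0.1, 0.01, 100_000
v = np.empty(n)
v[0] = 0.0
for i in range(1, n):
    D = d0 + d1 * v[i - 1] ** 2
    v[i] = (v[i - 1] - gamma * v[i - 1] * dt
            + np.sqrt(2 * D * dt) * rng.standard_normal())
```

Binning the increments by the current value of `v` and taking conditional means separates the deterministic (drift) part from the stochastic part, the same decomposition applied to the cell trajectories.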

  11. High-density fiber-optic DNA random microsphere array.

    PubMed

    Ferguson, J A; Steemers, F J; Walt, D R

    2000-11-15

    A high-density fiber-optic DNA microarray sensor was developed to monitor multiple DNA sequences in parallel. Microarrays were prepared by randomly distributing DNA probe-functionalized 3.1-microm-diameter microspheres in an array of wells etched in a 500-microm-diameter optical imaging fiber. Registration of the microspheres was performed using an optical encoding scheme and a custom-built imaging system. Hybridization was visualized using fluorescent-labeled DNA targets with a detection limit of 10 fM. Hybridization times of seconds are required for nanomolar target concentrations, and analysis is performed in minutes.

  12. Application of Multiple Imputation for Missing Values in Three-Way Three-Mode Multi-Environment Trial Data

    PubMed Central

    Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.

    2015-01-01

    It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways, multiple agglomerative hierarchical clustering, normal distribution model, normal regression model, and predictive mean match. The latter three models used both Bayesian analysis and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the one with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher accuracy of estimation performance than those using non-Bayesian analysis but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the overall best performance. PMID:26689369

  14. Stochastic description of geometric phase for polarized waves in random media

    NASA Astrophysics Data System (ADS)

    Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent

    2013-01-01

    We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.

  15. Closed-form solution for the Wigner phase-space distribution function for diffuse reflection and small-angle scattering in a random medium.

    PubMed

    Yura, H T; Thrane, L; Andersen, P E

    2000-12-01

    Within the paraxial approximation, a closed-form solution for the Wigner phase-space distribution function is derived for diffuse reflection and small-angle scattering in a random medium. This solution is based on the extended Huygens-Fresnel principle for the optical field, which is widely used in studies of wave propagation through random media. The results are general in that they apply to both an arbitrary small-angle volume scattering function, and arbitrary (real) ABCD optical systems. Furthermore, they are valid in both the single- and multiple-scattering regimes. Some general features of the Wigner phase-space distribution function are discussed, and analytic results are obtained for various types of scattering functions in the asymptotic limit s ≫ 1, where s is the optical depth. In particular, explicit results are presented for optical coherence tomography (OCT) systems. On this basis, a novel way of creating OCT images based on measurements of the momentum width of the Wigner phase-space distribution is suggested, and the advantage over conventional OCT images is discussed. Because all previously published studies regarding the Wigner function are carried out in the transmission geometry, it is important to note that the extended Huygens-Fresnel principle and the ABCD matrix formalism may be used successfully to describe this geometry (within the paraxial approximation). Therefore for completeness we present in an appendix the general closed-form solution for the Wigner phase-space distribution function in ABCD paraxial optical systems for direct propagation through random media, and in a second appendix absorption effects are included.

  16. Hurdle models for multilevel zero-inflated data via h-likelihood.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2010-12-30

    Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.

  17. Moderation analysis with missing data in the predictors.

    PubMed

    Zhang, Qian; Wang, Lijuan

    2017-12-01

    The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data could pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect with the existence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) Normal-distribution-based maximum likelihood estimation (NML); (b) Normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed in terms of the distribution mis-specification problem of the focal predictor. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Modeling fluid diffusion in cerebral white matter with random walks in complex environments

    NASA Astrophysics Data System (ADS)

    Levy, Amichai; Cwilich, Gabriel; Buldyrev, Sergey V.; Weeden, Van J.

    2012-02-01

    Recent studies with diffusion MRI have shown new aspects of geometric order in the brain, including complex path coherence within the cerebral cortex, and organization of cerebral white matter and connectivity across multiple scales. The main assumption of these studies is that water molecules diffuse along myelin sheaths of neuron axons in the white matter and thus the anisotropy of their diffusion tensor observed by MRI can provide information about the direction of the axons connecting different parts of the brain. We model the diffusion of particles confined in the space between bundles of cylindrical obstacles representing fibrous structures of various orientations. We have investigated the directional properties of the diffusion by studying the angular distribution of the end point of the random walks as a function of their length, to understand the scale over which the distribution randomizes. We will show evidence of qualitative change in the behavior of the diffusion for different volume fractions of obstacles. Comparisons with three-dimensional MRI images will be illustrated.

  19. Performance of Distributed CFAR Processors in Pearson Distributed Clutter

    NASA Astrophysics Data System (ADS)

    Messali, Zoubeida; Soltani, Faouzi

    2006-12-01

    This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level (CMLD) CFAR processors operating in positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest of (GO) and the smallest of (SO) CFAR processors are also determined. The performance characteristics of distributed systems are presented and compared in both homogeneous environments and in the presence of interfering targets. We demonstrate, via simulation results, that the distributed systems, when the clutter is modelled as a positive alpha-stable distribution, offer robustness properties against multiple-target situations, especially when using the "OR" fusion rule.
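For reference, the baseline cell-averaging (CA) CFAR rule works as follows. This is a plain sketch against exponential (square-law Gaussian) noise; the Pearson-distributed clutter of the paper is much heavier-tailed, which is precisely why the OS and censored variants are considered there:

```python
import numpy as np

rng = np.random.default_rng(9)

def ca_cfar(x, guard=2, train=8, scale=4.0):
    # Cell-averaging CFAR: declare a detection when a cell exceeds
    # `scale` times the mean of the surrounding training cells
    # (guard cells adjacent to the cell under test are excluded).
    half = guard + train
    hits = []
    for i in range(half, len(x) - half):
        window = np.r_[x[i - half:i - guard], x[i + guard + 1:i + half + 1]]
        if x[i] > scale * window.mean():
            hits.append(i)
    return hits

power = rng.exponential(1.0, size=500)   # square-law-detected noise cells
power[250] = 60.0                        # injected strong target
hits = ca_cfar(power)                    # detects the target at index 250,
                                         # plus false alarms at a rate set by `scale`
```

Because the threshold adapts to the local clutter estimate, the false alarm rate stays constant as the clutter level varies, the defining CFAR property; interfering targets inside the training window inflate the estimate, which motivates the OS and censored variants studied in the paper.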

  20. Explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2004-04-01

    Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits the singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of theory of the random multiplicative process a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.

  2. Missing Data and Multiple Imputation: An Unbiased Approach

    NASA Technical Reports Server (NTRS)

    Foy, M.; VanBaalen, M.; Wear, M.; Mendez, C.; Mason, S.; Meyers, V.; Alexander, D.; Law, J.

    2014-01-01

    The default method of dealing with missing data in statistical analyses is to only use the complete observations (complete case analysis), which can lead to unexpected bias when data do not meet the assumption of missing completely at random (MCAR). For the assumption of MCAR to be met, missingness cannot be related to either the observed or unobserved variables. A less stringent assumption, missing at random (MAR), requires that missingness not be associated with the value of the missing variable itself, but can be associated with the other observed variables. When data are truly MAR as opposed to MCAR, the default complete case analysis method can lead to biased results. There are statistical options available to adjust for data that are MAR, including multiple imputation (MI) which is consistent and efficient at estimating effects. Multiple imputation uses informing variables to determine statistical distributions for each piece of missing data. Then multiple datasets are created by randomly drawing on the distributions for each piece of missing data. Since MI is efficient, only a limited number, usually less than 20, of imputed datasets are required to get stable estimates. Each imputed dataset is analyzed using standard statistical techniques, and then results are combined to get overall estimates of effect. A simulation study will be demonstrated to show the results of using the default complete case analysis, and MI in a linear regression of MCAR and MAR simulated data. Further, MI was successfully applied to the association study of CO2 levels and headaches when initial analysis showed there may be an underlying association between missing CO2 levels and reported headaches. Through MI, we were able to show that there is a strong association between average CO2 levels and the risk of headaches. Each unit increase in CO2 (mmHg) resulted in a doubling in the odds of reported headaches.
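The MI procedure described (impute each missing value from a predictive distribution informed by observed variables, analyse each completed dataset, then pool) can be sketched as follows. This is a simplified, non-Bayesian sketch with invented toy data; proper MI would also draw the regression parameters from their posterior for each imputation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy MAR setting: y depends on x, and y's missingness also depends on x.
n = 300
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)
missing = rng.random(n) < 1.0 / (1.0 + np.exp(-x))
y_obs = np.where(missing, np.nan, y)
obs = ~np.isnan(y_obs)

# Regression-based predictive distribution for the missing values.
slope, intercept = np.polyfit(x[obs], y_obs[obs], 1)
resid_sd = np.std(y_obs[obs] - (slope * x[obs] + intercept))

m, estimates, variances = 20, [], []
for _ in range(m):
    y_comp = y_obs.copy()
    y_comp[~obs] = (slope * x[~obs] + intercept
                    + rng.normal(scale=resid_sd, size=(~obs).sum()))
    estimates.append(y_comp.mean())           # analysis: estimate the mean of y
    variances.append(y_comp.var(ddof=1) / n)  # its within-imputation variance

# Rubin's rules: pooled point estimate and total variance.
qbar = np.mean(estimates)
total_var = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
```

A complete-case mean here would be biased (the observed sample over-represents small x), while the pooled MI estimate recovers the true mean, mirroring the MCAR-vs-MAR point made above.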

  3. Stationary properties of maximum-entropy random walks.

    PubMed

    Dixit, Purushottam D

    2015-10-01

    Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
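
    In the special case with no path-dependent constraints, the maximum path entropy Markov process on an undirected graph reduces to the well-known maximum-entropy random walk, whose transition matrix and stationary distribution follow from the leading eigenpair of the adjacency matrix: P_ij = A_ij ψ_j / (λ ψ_i) and π_i ∝ ψ_i². A minimal numpy sketch (the 4-node graph is arbitrary):

    ```python
    import numpy as np

    # Adjacency matrix of a small connected undirected graph.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    # Leading (Perron) eigenpair of the symmetric adjacency matrix.
    vals, vecs = np.linalg.eigh(A)
    lam, psi = vals[-1], np.abs(vecs[:, -1])

    # Max-entropy transitions and stationary distribution pi_i ∝ psi_i^2,
    # which in general differs from a Boltzmann distribution over states.
    P = A * psi[None, :] / (lam * psi[:, None])
    pi = psi ** 2 / np.sum(psi ** 2)
    ```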

  4. A descriptive model of resting-state networks using Markov chains.

    PubMed

    Xie, H; Pal, R; Mitra, S

    2016-08-01

Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interest, while the underlying functional network structure remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of the resting-state networks (RSNs) based on the steady-state distributions and provides a novel angle to investigate the RSFC of multiple functional nodes. This paper evaluates the consistency of two networks based on the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that the generated steady-state distributions of the default mode network have higher consistency across subjects than random nodes from various RSNs.
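
    The consistency measure described, the Hellinger distance between steady-state distributions of inferred Markov chains, can be sketched as follows. The two 3-state transition matrices are invented stand-ins for chains inferred from two subjects, not data from the paper.

    ```python
    import numpy as np

    def steady_state(P):
        """Stationary distribution of a row-stochastic transition matrix P."""
        vals, vecs = np.linalg.eig(P.T)
        v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        return v / v.sum()

    def hellinger(p, q):
        """Hellinger distance between two discrete distributions (in [0, 1])."""
        return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

    # Two hypothetical 3-state chains, e.g. inferred from two subjects.
    P1 = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.3, 0.6]])
    P2 = np.array([[0.7, 0.2, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.2, 0.2, 0.6]])

    d = hellinger(steady_state(P1), steady_state(P2))
    ```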

  5. Suppression of thermal frequency noise in erbium-doped fiber random lasers.

    PubMed

    Saxena, Bhavaye; Bao, Xiaoyi; Chen, Liang

    2014-02-15

Frequency and intensity noise are characterized for erbium-doped fiber (EDF) random lasers based on a Rayleigh distributed feedback mechanism. We propose a theoretical model for the frequency noise of such random lasers using the property of random phase modulations from multiple scattering points in ultralong fibers. We find that the Rayleigh feedback suppresses the noise at higher frequencies by introducing a Lorentzian envelope over the thermal frequency noise of a long fiber cavity. The theoretical model and measured frequency noise agree quantitatively with two fitting parameters. The random laser exhibits a noise level of 6 Hz²/Hz at 2 kHz, which is lower than that of conventional narrow-linewidth EDF fiber lasers and nonplanar ring laser oscillators (NPROs) by factors of 166 and 2, respectively. The frequency noise has a minimum value for an optimum length of the Rayleigh scattering fiber.

  6. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    PubMed

    Lash, Timothy L

    2007-11-26

The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations.
The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretations. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, so elevate the plane of discussion regarding study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
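
    The draw-and-adjust loop can be sketched for the glyphosate example above. The single multiplicative bias parameter and its lognormal distribution here are illustrative assumptions, not the distributions assigned in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Conventional result from the abstract: HR 2.6 (95% CI 0.7 to 9.4).
    observed_hr = 2.6

    # Assign a probability distribution to one bias parameter: a multiplicative
    # bias factor (values > 1 bias the estimate away from the null). The
    # lognormal with median 1.7 is purely illustrative.
    iters = 100_000
    bias_factor = rng.lognormal(mean=np.log(1.7), sigma=0.4, size=iters)

    # Resample the conventional random error around each bias-adjusted estimate.
    se = (np.log(9.4) - np.log(0.7)) / (2 * 1.96)
    adjusted = np.exp(np.log(observed_hr / bias_factor)
                      + rng.normal(scale=se, size=iters))

    # Point estimate and simulation interval from the frequency distribution.
    median_hr = float(np.median(adjusted))
    lo, hi = np.percentile(adjusted, [2.5, 97.5])
    ```

    Because the adjusted distribution reflects both systematic and random error, its simulation interval comes out wider than the conventional confidence interval, mirroring the qualitative finding of the paper.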

  7. Quantum cryptography and applications in the optical fiber network

    NASA Astrophysics Data System (ADS)

    Luo, Yuhui

    2005-09-01

Quantum cryptography, as part of quantum information and communications, can provide absolute security for information transmission because it is established on the fundamental laws of quantum theory, such as the uncertainty principle, the no-cloning theorem and quantum entanglement. In this thesis research, a novel scheme to implement quantum key distribution based on multiphoton entanglement with a new protocol is proposed. Its advantages are: a larger information capacity can be obtained with a longer transmission distance, and the detection of multiple photons is easier than that of a single photon. The security and attacks pertaining to such a system are also studied. Next, a quantum key distribution over wavelength division multiplexed (WDM) optical fiber networks is realized. Quantum key distribution in networks is a long-standing problem for practical applications. Here we combine quantum cryptography and WDM to solve this problem, because WDM technology is universally deployed in current and next-generation fiber networks. The ultimate target is to deploy quantum key distribution over commercial networks. The problems arising from the networks are also studied in this part. Then quantum key distribution in multi-access networks using wavelength routing technology is investigated in this research. For the first time, quantum cryptography for multiple individually targeted users has been successfully implemented, in sharp contrast to that using the indiscriminating broadcasting structure. It overcomes the shortcoming that every user in the network can acquire the quantum key signals intended to be exchanged between only two users. Furthermore, a more efficient scheme of quantum key distribution is adopted, hence resulting in a higher key rate. Lastly, a quantum random number generator based on quantum optics has been experimentally demonstrated.
This device is a key component for quantum key distribution as it can create truly random numbers, which is an essential requirement to perform quantum key distribution. This new generator is composed of a single optical fiber coupler with fiber pigtails, which can be easily used in optical fiber communications.

  8. S-wave attenuation structure beneath the northern Izu-Bonin arc

    NASA Astrophysics Data System (ADS)

    Takahashi, Tsutomu; Obana, Koichiro; Kodaira, Shuichi

    2016-04-01

To understand temperature structure or magma distribution in the crust and uppermost mantle, it is essential to know their attenuation structure. This study estimated the 3-D S-wave attenuation structure in the crust and uppermost mantle at the northern Izu-Bonin arc, taking into account the apparent attenuation due to multiple forward scattering. In the uppermost mantle, two areas of high seismic attenuation (high Q⁻¹) imaged beneath the volcanic front were mostly colocated with low-velocity anomalies. This coincidence suggests that these high-Q⁻¹ areas in low-velocity zones are the most likely candidates for high-temperature regions beneath volcanoes. The distribution of random inhomogeneities indicated the presence of three anomalies beneath the volcanic front: two were in high-Q⁻¹ areas, but the third was in a moderate-Q⁻¹ area, indicating a low correlation between random inhomogeneities and Q⁻¹. All three anomalies of random inhomogeneities were rich in short-wavelength spectra. The most probable interpretation of such spectra is the presence of volcanic rock, which would be related to accumulated magma intrusion during episodes of volcanic activity. Therefore, the different distributions of Q⁻¹ and random inhomogeneities imply that the positions of hot regions in the uppermost mantle beneath this arc have changed over time, and may provide important constraints on the evolutionary processes of arc crust and volcanoes.

  9. Power and instrument strength requirements for Mendelian randomization studies using multiple genetic variants.

    PubMed

    Pierce, Brandon L; Ahsan, Habibul; Vanderweele, Tyler J

    2011-06-01

Mendelian Randomization (MR) studies assess the causality of an exposure-disease association using genetic determinants [i.e. instrumental variables (IVs)] of the exposure. Power and IV strength requirements for MR studies using multiple genetic variants have not been explored. We simulated cohort data sets consisting of a normally distributed disease trait, a normally distributed exposure, which affects this trait, and a biallelic genetic variant that affects the exposure. We estimated power to detect an effect of exposure on disease for varying allele frequencies, effect sizes and sample sizes (using two-stage least squares regression on 10,000 data sets: Stage 1 is a regression of exposure on the variant; Stage 2 is a regression of disease on the fitted exposure). Similar analyses were conducted using multiple genetic variants (5, 10, 20) as independent or combined IVs. We assessed IV strength using the first-stage F statistic. Simulations of realistic scenarios indicate that MR studies will require large (n > 1000), often very large (n > 10,000), sample sizes. In many cases, so-called 'weak IV' problems arise when using multiple variants as independent IVs (even with as few as five), resulting in biased effect estimates. Combining genetic factors into fewer IVs results in modest power decreases, but alleviates weak IV problems. Ideal methods for combining genetic factors depend upon knowledge of the genetic architecture underlying the exposure. The feasibility of well-powered, unbiased MR studies will depend upon the amount of variance in the exposure that can be explained by known genetic factors and the 'strength' of the IV set derived from these genetic factors.
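
    The simulation design, a biallelic variant affecting a normally distributed exposure that in turn affects a normally distributed trait, analyzed by two-stage least squares, can be sketched as follows. The effect sizes, allele frequency, and the explicit confounder are illustrative choices, not the paper's parameter grid.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n = 20_000
    maf = 0.3
    g = rng.binomial(2, maf, size=n)            # biallelic variant, additive coding

    # Exposure affected by the variant plus a confounder; outcome affected by the
    # exposure (true effect 0.5) and the same confounder, so naive regression
    # is biased while the IV analysis is not.
    u = rng.normal(size=n)
    exposure = 0.3 * g + u + rng.normal(size=n)
    outcome = 0.5 * exposure + u + rng.normal(size=n)

    # Stage 1: regress exposure on the variant; keep fitted values.
    X1 = np.column_stack([np.ones(n), g])
    fitted = X1 @ np.linalg.lstsq(X1, exposure, rcond=None)[0]

    # Stage 2: regress outcome on the fitted exposure.
    X2 = np.column_stack([np.ones(n), fitted])
    iv_estimate = float(np.linalg.lstsq(X2, outcome, rcond=None)[0][1])

    # First-stage F statistic as the instrument-strength measure.
    resid = exposure - fitted
    ess = np.sum((fitted - exposure.mean()) ** 2)
    rss = np.sum(resid ** 2)
    f_stat = float(ess / (rss / (n - 2)))

    # Naive (confounded) regression for comparison.
    naive_slope = float(np.polyfit(exposure, outcome, 1)[0])
    ```

    With this sample size the instrument is strong (large first-stage F) and the IV estimate recovers the true effect, while the naive slope is inflated by the confounder.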

  10. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical property in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has addressed the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
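
    The priority idea can be illustrated in the plane (the paper's setting is arbitrary surfaces with intrinsic metrics; the unit square and fixed radius here are simplifying assumptions). Because each candidate's random priority is independent of its position, resolving conflicts by priority is unbiased, and a serial pass in descending priority order produces the same accepted set that a parallel run with priority-based conflict resolution would.

    ```python
    import math
    import random

    random.seed(3)

    def poisson_disk_priority(n_candidates, radius):
        """Priority-based dart throwing in the unit square: every candidate
        gets a random unique priority; processing candidates in priority
        order resolves all conflicts deterministically."""
        candidates = [(random.random(), random.random(), random.random())
                      for _ in range(n_candidates)]   # (priority, x, y)
        candidates.sort(reverse=True)                 # highest priority first
        accepted = []
        for _, x, y in candidates:
            if all(math.hypot(x - ax, y - ay) >= radius for ax, ay in accepted):
                accepted.append((x, y))
        return accepted

    samples = poisson_disk_priority(2000, 0.05)
    ```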

  11. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    PubMed Central

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observational study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  12. Multiple-predators-based capture process on complex networks

    NASA Astrophysics Data System (ADS)

    Ramiz Sharafat, Rajput; Pu, Cunlai; Li, Jie; Chen, Rongbin; Xu, Zhongqi

    2017-03-01

The predator/prey (capture) problem is a prototype of many network-related applications. We study the capture process on complex networks by considering multiple predators from multiple sources. In our model, some lions start from multiple sources simultaneously to capture the lamb by biased random walks, which are controlled by a free parameter α. We derive the distribution of the lamb's lifetime and the expected lifetime ⟨T⟩. Through simulation, we find that the expected lifetime drops substantially with an increasing number of lions. We also study how the underlying topological structure affects the capture process, and find that the lamb survives longer on small-degree nodes than on large-degree nodes. Moreover, dense or homogeneous network structures work against the survival of the lamb.
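
    A minimal simulation of the model, lions performing degree-biased random walks until one lands on the lamb's node, illustrates the drop in expected lifetime with more lions. The 20-node graph, the static lamb, and α = 0 (unbiased walk) are arbitrary illustrative choices.

    ```python
    import random

    random.seed(4)

    def biased_step(graph, node, alpha):
        """Move to a neighbor chosen with probability proportional to degree**alpha."""
        nbrs = graph[node]
        weights = [len(graph[v]) ** alpha for v in nbrs]
        return random.choices(nbrs, weights=weights)[0]

    def capture_time(graph, lamb, starts, alpha, max_steps=10_000):
        """Steps until any lion reaches the (static) lamb."""
        lions = list(starts)
        for t in range(1, max_steps + 1):
            lions = [biased_step(graph, v, alpha) for v in lions]
            if lamb in lions:
                return t
        return max_steps

    # Small illustrative graph: a ring of 20 nodes plus two chords.
    n = 20
    graph = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    for a, b in [(0, 10), (5, 15)]:
        graph[a].append(b)
        graph[b].append(a)

    # Mean lifetime of the lamb (node 0) versus the number of lions.
    trials = 300
    mean_t = {k: sum(capture_time(graph, 0, random.sample(range(1, n), k), alpha=0.0)
                     for _ in range(trials)) / trials
              for k in (1, 4)}
    ```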

  13. Chromatin Folding, Fragile Sites, and Chromosome Aberrations Induced by Low- and High- LET Radiation

    NASA Technical Reports Server (NTRS)

    Zhang, Ye; Cox, Bradley; Asaithamby, Aroumougame; Chen, David J.; Wu, Honglu

    2013-01-01

We previously demonstrated non-random distributions of breaks involved in chromosome aberrations induced by low- and high-LET radiation. To investigate the factors contributing to the break point distribution in radiation-induced chromosome aberrations, human epithelial cells were fixed in G1 phase. Interphase chromosomes were hybridized with a multicolor banding in situ hybridization (mBAND) probe for chromosome 3, which distinguishes six regions of the chromosome in separate colors. After the images were captured with a laser scanning confocal microscope, the 3-dimensional structure of interphase chromosome 3 was reconstructed at the multi-megabase-pair scale. Specific locations of the chromosome, in interphase, were also analyzed with bacterial artificial chromosome (BAC) probes. Both mBAND and BAC studies revealed non-random folding of chromatin in interphase, and suggested an association of interphase chromatin folding with the radiation-induced chromosome aberration hotspots. We further investigated the distribution of genes, as well as the distribution of breaks found in tumor cells. Comparisons of these distributions to the radiation hotspots showed that some of the radiation hotspots coincide with the frequent breaks found in solid tumors and with the fragile sites for other environmental toxins. Our results suggest that multiple factors, including the chromatin structure and the gene distribution, can contribute to radiation-induced chromosome aberrations.

  14. Global synchronization of memristive neural networks subject to random disturbances via distributed pinning control.

    PubMed

    Guo, Zhenyuan; Yang, Shaofu; Wang, Jun

    2016-12-01

    This paper presents theoretical results on global exponential synchronization of multiple memristive neural networks in the presence of external noise by means of two types of distributed pinning control. The multiple memristive neural networks are coupled in a general structure via a nonlinear function, which consists of a linear diffusive term and a discontinuous sign term. A pinning impulsive control law is introduced in the coupled system to synchronize all neural networks. Sufficient conditions are derived for ascertaining global exponential synchronization in mean square. In addition, a pinning adaptive control law is developed to achieve global exponential synchronization in mean square. Both pinning control laws utilize only partial state information received from the neighborhood of the controlled neural network. Simulation results are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Comparing multiple competing interventions in the absence of randomized trials using clinical risk-benefit analysis

    PubMed Central

    2012-01-01

Background To demonstrate the use of risk-benefit analysis for comparing multiple competing interventions in the absence of randomized trials, we applied this approach to the evaluation of five anticoagulants to prevent thrombosis in patients undergoing orthopedic surgery. Methods Using a cost-effectiveness approach from a clinical perspective (i.e. risk-benefit analysis), we compared thromboprophylaxis with warfarin, low molecular weight heparin, unfractionated heparin, fondaparinux or ximelagatran in patients undergoing major orthopedic surgery, with sub-analyses according to surgery type. Proportions and variances of events defining risk (major bleeding) and benefit (thrombosis averted) were obtained through a meta-analysis and used to define beta distributions. Monte Carlo simulations were conducted and used to calculate incremental risks, benefits, and risk-benefit ratios. Finally, net clinical benefit was calculated for all replications across a range of risk-benefit acceptability thresholds, with a reference range obtained by estimating the case fatality rate - ratio of thrombosis to bleeding. Results The analysis showed that, compared to placebo, ximelagatran was superior to the other options, but final results were influenced by type of surgery, since ximelagatran was superior in total knee replacement but not in total hip replacement. Conclusions Using simulation and economic techniques, we demonstrate a method that allows comparing multiple competing interventions in the absence of randomized trials with multiple arms by determining the option with the best risk-benefit profile. It can be helpful in clinical decision making since it incorporates risk, benefit, and personal risk acceptance. PMID:22233221
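
    The simulation machinery, beta distributions for event proportions, Monte Carlo draws, and net clinical benefit at an acceptability threshold, can be sketched with invented counts for two hypothetical options; the numbers and the threshold λ = 2 are not the paper's anticoagulant data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical meta-analytic counts: (thromboses, major bleeds, patients).
    options = {
        "A": {"thromb": 40, "bleed": 10, "n": 1000},
        "B": {"thromb": 25, "bleed": 18, "n": 1000},
    }

    sims = 50_000
    draws = {}
    for name, d in options.items():
        # Beta distributions for event proportions (uniform Beta(1,1) prior).
        p_thromb = rng.beta(d["thromb"] + 1, d["n"] - d["thromb"] + 1, sims)
        p_bleed = rng.beta(d["bleed"] + 1, d["n"] - d["bleed"] + 1, sims)
        draws[name] = (p_thromb, p_bleed)

    # Incremental benefit (thromboses averted) and risk (extra bleeds) of B vs A.
    inc_benefit = draws["A"][0] - draws["B"][0]
    inc_risk = draws["B"][1] - draws["A"][1]

    # Net clinical benefit at an acceptability threshold lam: how many extra
    # bleeds one averted thrombosis is "worth" (lam = 2 is illustrative).
    lam = 2.0
    net_benefit = inc_benefit - inc_risk / lam
    prob_b_better = float(np.mean(net_benefit > 0))
    ```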

  16. A probabilistic fatigue analysis of multiple site damage

    NASA Technical Reports Server (NTRS)

    Rohrbaugh, S. M.; Ruff, D.; Hillberry, B. M.; Mccabe, G.; Grandt, A. F., Jr.

    1994-01-01

    The variability in initial crack size and fatigue crack growth is incorporated in a probabilistic model that is used to predict the fatigue lives for unstiffened aluminum alloy panels containing multiple site damage (MSD). The uncertainty of the damage in the MSD panel is represented by a distribution of fatigue crack lengths that are analytically derived from equivalent initial flaw sizes. The variability in fatigue crack growth rate is characterized by stochastic descriptions of crack growth parameters for a modified Paris crack growth law. A Monte-Carlo simulation explicitly describes the MSD panel by randomly selecting values from the stochastic variables and then grows the MSD cracks with a deterministic fatigue model until the panel fails. Different simulations investigate the influences of the fatigue variability on the distributions of remaining fatigue lives. Six cases that consider fixed and variable conditions of initial crack size and fatigue crack growth rate are examined. The crack size distribution exhibited a dominant effect on the remaining fatigue life distribution, and the variable crack growth rate exhibited a lesser effect on the distribution. In addition, the probabilistic model predicted that only a small percentage of the life remains after a lead crack develops in the MSD panel.
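
    The structure of such a probabilistic fatigue model can be sketched as follows: draw stochastic initial crack sizes and growth-rate coefficients, then grow each crack deterministically with a Paris-type law until a critical size. The Paris constants, stress range, critical size, and lognormal parameters are illustrative assumptions, not the panel data of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def cycles_to_failure(a0, C, m, a_crit=0.02, delta_sigma=100.0, da=2e-5):
        """Integrate a Paris law da/dN = C*(dK)**m with dK = delta_sigma*sqrt(pi*a),
        stepping crack length a until it reaches the critical size."""
        a, n_cycles = a0, 0.0
        while a < a_crit:
            dk = delta_sigma * np.sqrt(np.pi * a)
            n_cycles += da / (C * dk ** m)
            a += da
        return n_cycles

    # Monte Carlo over stochastic initial crack size and growth coefficient.
    sims = 200
    a0 = rng.lognormal(np.log(1e-3), 0.3, sims)     # initial crack size (m)
    C = rng.lognormal(np.log(1e-11), 0.2, sims)     # Paris coefficient
    lives = np.array([cycles_to_failure(a, c, m=3.0) for a, c in zip(a0, C)])
    ```

    As in the paper's finding, the initial crack size dominates: larger initial cracks map to systematically shorter simulated lives.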

  17. Modeling contemporary climate profiles of whitebark pine (Pinus albicaulis) and predicting responses to global warming

    Treesearch

    Marcus V. Warwell; Gerald E. Rehfeldt; Nicholas L. Crookston

    2006-01-01

    The Random Forests multiple regression tree was used to develop an empirically-based bioclimate model for the distribution of Pinus albicaulis (whitebark pine) in western North America, latitudes 31° to 51° N and longitudes 102° to 125° W. Independent variables included 35 simple expressions of temperature and precipitation and their interactions....

  18. Self-organisation of random oscillators with Lévy stable distributions

    NASA Astrophysics Data System (ADS)

    Moradi, Sara; Anderson, Johan

    2017-08-01

A novel possibility of self-organized behaviour of stochastically driven oscillators is presented. It is shown that synchronization by Lévy stable processes is significantly more efficient than that by oscillators with Gaussian statistics. The impact of outlier events from the tail of the distribution function was examined by artificially introducing a few additional oscillators with very strong coupling strengths, and it is found that, remarkably, even one such rare and extreme event may govern the long-term behaviour of the coupled system. In addition to the multiplicative noise component, we have investigated the impact of an external additive Lévy distributed noise component on the synchronization properties of the oscillators.

  19. On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model

    NASA Astrophysics Data System (ADS)

    Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin

    2018-01-01

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
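
    The coupon-collector analogy for the off-critical coupling time can be checked directly: standardized collection times (T − n ln n)/n converge to a Gumbel distribution, whose mean is the Euler-Mascheroni constant γ ≈ 0.577. A stdlib-only sketch:

    ```python
    import math
    import random

    random.seed(8)

    def coupon_collector_time(n):
        """Number of uniform draws needed to collect all n coupons."""
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            draws += 1
        return draws

    # Standardized times (T - n ln n)/n; their empirical mean should approach
    # the Gumbel mean, the Euler-Mascheroni constant 0.5772...
    n, trials = 200, 1000
    z = [(coupon_collector_time(n) - n * math.log(n)) / n for _ in range(trials)]
    mean_z = sum(z) / trials
    ```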

  20. Random walk-percolation-based modeling of two-phase flow in porous media: Breakthrough time and net to gross ratio estimation

    NASA Astrophysics Data System (ADS)

    Ganjeh-Ghazvini, Mostafa; Masihi, Mohsen; Ghaedi, Mojtaba

    2014-07-01

    Fluid flow modeling in porous media has many applications in waste treatment, hydrology and petroleum engineering. In any geological model, flow behavior is controlled by multiple properties. These properties must be known in advance of common flow simulations. When uncertainties are present, deterministic modeling often produces poor results. Percolation and Random Walk (RW) methods have recently been used in flow modeling. Their stochastic basis is useful in dealing with uncertainty problems. They are also useful in finding the relationship between porous media descriptions and flow behavior. This paper employs a simple methodology based on random walk and percolation techniques. The method is applied to a well-defined model reservoir in which the breakthrough time distributions are estimated. The results of this method and the conventional simulation are then compared. The effect of the net to gross ratio on the breakthrough time distribution is studied in terms of Shannon entropy. Use of the entropy plot allows one to assign the appropriate net to gross ratio to any porous medium.

  1. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
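
    The two-step idea, shape a Gaussian realization to a target auto-spectrum, then push it through a zero-memory nonlinearity to fix the marginal pdf, can be sketched minimally. The low-pass spectrum shape and the exponential target marginal (skewness 2) are illustrative assumptions; the ZMNL used here is the Gaussian-CDF/inverse-CDF mapping, which preserves the spectrum only approximately.

    ```python
    from math import erf, sqrt

    import numpy as np

    rng = np.random.default_rng(7)

    n = 2 ** 14
    # Illustrative target auto-spectrum: a simple low-pass shape.
    freqs = np.fft.rfftfreq(n)
    target_amp = 1.0 / np.sqrt(1.0 + (freqs / 0.05) ** 4)

    # Step 1: Gaussian realization with the target spectrum (random phases).
    phase = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = target_amp * np.exp(1j * phase)
    spectrum[0] = 0.0                       # zero mean
    gauss = np.fft.irfft(spectrum, n)
    gauss /= gauss.std()

    # Step 2: ZMNL transform to the target marginal: Gaussian CDF, then the
    # inverse CDF of an exponential(1) distribution.
    u = 0.5 * (1.0 + np.vectorize(erf)(gauss / sqrt(2.0)))
    u = np.clip(u, 1e-12, 1 - 1e-12)
    y = -np.log(1.0 - u)

    skewness = float(np.mean((y - y.mean()) ** 3) / y.std() ** 3)
    ```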

  2. Deducing the multi-trader population driving a financial market

    NASA Astrophysics Data System (ADS)

    Gupta, Nachi; Hauser, Raphael; Johnson, Neil

    2005-12-01

    We have previously laid out a basic framework for predicting financial movements and pockets of predictability by tracking the distribution of a multi-trader population playing on an artificial financial market model. This work explores extensions to this basic framework. We allow for more intelligent agents with a richer strategy set, and we no longer constrain the distribution over these agents to a probability space. We then introduce a fusion scheme which accounts for multiple runs of randomly chosen sets of possible agent types. We also discuss a mechanism for bias removal on the estimates.

  3. Random-access algorithms for multiuser computer communication networks. Doctoral thesis, 1 September 1986-31 August 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papantoni-Kazakos, P.; Paterakis, M.

    1988-07-01

For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. Also considered is the case of noisy channels. The noise effect appears as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window Random-Access Algorithms (RAAs) are considered next. These algorithms constitute an important subclass of Multiple-Access Algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.

  4. Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevrekidis, Ioannis

    2017-03-22

The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media, both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).

  5. Diversity of multilayer networks and its impact on collaborating epidemics

    NASA Astrophysics Data System (ADS)

    Min, Yong; Hu, Jiaren; Wang, Weihong; Ge, Ying; Chang, Jie; Jin, Xiaogang

    2014-12-01

Interacting epidemics on diverse multilayer networks are increasingly important in modeling and analyzing the diffusion processes of real complex systems. A viral agent spreading on one layer of a multilayer network can interact with its counterparts by promoting (cooperative interaction), suppressing (competitive interaction), or inducing (collaborating interaction) its diffusion on other layers. Collaborating interaction displays different patterns: (i) random collaboration, where intralayer or interlayer induction has the same probability; (ii) concentrating collaboration, where consecutive intralayer induction is guaranteed with a probability of 1; and (iii) cascading collaboration, where consecutive intralayer induction is banned with a probability of 0. In this paper, we develop a top-bottom framework that uses only two distributions, the overlaid degree distribution and edge-type distribution, to model collaborating epidemics on multilayer networks. We then characterize the response of the three collaborating patterns to structural diversity (evenness and difference of network layers). For viral agents with small transmissibility, we find that random collaboration is more effective in networks with higher diversity (high evenness and difference), while the concentrating pattern is more suitable in uneven networks. Interestingly, the cascading pattern requires a network with moderate difference and high evenness, and the moderately uneven coupling of multiple network layers can effectively increase robustness to resist cascading failure. With large transmissibility, however, we find that all collaborating patterns are more effective in high-diversity networks. Our work provides a systemic analysis of collaborating epidemics on multilayer networks. The results enhance our understanding of biotic and informative diffusion through multiple vectors.

  6. Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Salimi, S.; Jafarizadeh, M. A.

    2009-06-01

    In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In this recipe, the probability of observing the particle on the direct product graph is obtained by multiplying the probabilities on the corresponding subgraphs, which makes the method useful for determining the probability of walks on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also show that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
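    For the continuous-time classical walk, the product rule above can be checked numerically, assuming the Cartesian-product convention in which the generator of the product walk is L1 ⊗ I + I ⊗ L2 (Laplacians of the factor graphs); under that convention exp(−tL) factorizes into a Kronecker product, so the probability on the product graph is the product of factor probabilities. A minimal sketch on two cycles:

    ```python
    import numpy as np
    from scipy.linalg import expm

    def cycle_laplacian(n):
        """Graph Laplacian L = D - A of the n-cycle."""
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
        return np.diag(A.sum(axis=1)) - A

    t = 0.7
    L1, L2 = cycle_laplacian(4), cycle_laplacian(5)
    P1 = expm(-t * L1)[0]  # distribution of a walk started at vertex 0 of C4
    P2 = expm(-t * L2)[0]  # same on C5

    # generator of the product walk (Cartesian-product convention, an assumption)
    Lp = np.kron(L1, np.eye(5)) + np.kron(np.eye(4), L2)
    Pp = expm(-t * Lp)[0]  # walk started at vertex (0, 0)
    ```

    Since L1 ⊗ I and I ⊗ L2 commute, the matrix exponential factorizes and `Pp` equals `np.kron(P1, P2)` exactly (up to floating-point error).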

  7. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    PubMed

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  8. Universal principles governing multiple random searchers on complex networks: The logarithmic growth pattern and the harmonic law

    NASA Astrophysics Data System (ADS)

    Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan

    2018-03-01

    We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles—the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate these two universal principles established across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.

  9. Confidence intervals for a difference between lognormal means in cluster randomization trials.

    PubMed

    Poirier, Julia; Zou, G Y; Koval, John

    2017-04-01

    Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials are in many cases positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well for small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths, as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community-acquired pneumonia.
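    The method of variance estimates recovery (MOVER) mentioned above can be illustrated for a single arm: the arithmetic mean of a lognormal outcome is exp(μ + σ²/2), and MOVER combines a t-based interval for μ with a chi-square interval for σ²/2 into a closed-form interval for their sum. This sketch shows the general MOVER idea only, not the paper's exact two-arm, cluster-adjusted procedure:

    ```python
    import numpy as np
    from scipy import stats

    def lognormal_mean_ci(x, alpha=0.05):
        """MOVER confidence interval for the arithmetic mean of a lognormal
        sample (single arm, no clustering -- an illustrative simplification)."""
        y = np.log(x)
        n = len(y)
        m, s2 = y.mean(), y.var(ddof=1)
        theta = m + s2 / 2                       # log of the arithmetic mean
        # separate CIs for m (t-based) and s2/2 (chi-square-based)
        t = stats.t.ppf(1 - alpha / 2, n - 1)
        l_m, u_m = m - t * np.sqrt(s2 / n), m + t * np.sqrt(s2 / n)
        l_v = (n - 1) * s2 / (2 * stats.chi2.ppf(1 - alpha / 2, n - 1))
        u_v = (n - 1) * s2 / (2 * stats.chi2.ppf(alpha / 2, n - 1))
        # MOVER recovery of the interval for theta = m + s2/2
        L = theta - np.sqrt((m - l_m) ** 2 + (s2 / 2 - l_v) ** 2)
        U = theta + np.sqrt((u_m - m) ** 2 + (u_v - s2 / 2) ** 2)
        return np.exp(L), np.exp(U)
    ```

    The recovered limits always bracket the point estimate exp(μ̂ + σ̂²/2) strictly.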

  10. Spectrum of walk matrix for Koch network and its application

    NASA Astrophysics Data System (ADS)

    Xie, Pinchen; Lin, Yuan; Zhang, Zhongzhi

    2015-06-01

    Various structural and dynamical properties of a network are encoded in the eigenvalues of the walk matrix describing random walks on the network. In this paper, we study the spectra of the walk matrix of the Koch network, which displays prominent scale-free and small-world features. Utilizing the particular architecture of the network, we obtain all the eigenvalues and their corresponding multiplicities. Based on the link between the eigenvalues of the walk matrix and the random target access time, defined as the expected time for a walker to go from an arbitrary node to another node selected randomly according to the steady-state distribution, we then derive an explicit solution for the random target access time for random walks on the Koch network. Finally, we corroborate our computation of the eigenvalues by enumerating spanning trees in the Koch network, using the connection between eigenvalues and spanning trees, where a spanning tree of a network is a subgraph that is a tree containing all the nodes.
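    The spanning-tree cross-check used above can be demonstrated on a small graph: by Kirchhoff's matrix-tree theorem, any cofactor of the graph Laplacian equals the number of spanning trees, while the walk matrix D⁻¹A encodes the random-walk dynamics. A toy check on the complete graph K4, which Cayley's formula says has 4² = 16 spanning trees:

    ```python
    import numpy as np

    # adjacency matrix of the complete graph K4
    A = np.ones((4, 4)) - np.eye(4)
    D = np.diag(A.sum(axis=1))
    L = D - A                               # graph Laplacian

    # matrix-tree theorem: any cofactor of L counts spanning trees
    n_trees = round(np.linalg.det(L[1:, 1:]))

    # walk matrix of the random walk on K4; its largest eigenvalue is 1,
    # reflecting the steady-state distribution of the walk
    W = np.linalg.inv(D) @ A
    eigs = np.sort(np.real(np.linalg.eigvals(W)))
    ```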

  11. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multiple Conditions

    DTIC Science & Technology

    2009-03-01

    [Extraction fragments from AFIT thesis report AFIT/GE/ENG/09-23, Low Probability of Intercept Waveforms via Intersymbol Dither: a standard disclaimer (views do not reflect official policy of the United States Air Force, Department of Defense, or the United States Government), a symbol-list entry defining a random variable governing the distribution of dither values and its probability density function, and a passage on the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.]

  12. Record statistics of a strongly correlated time series: random walks and Lévy flights

    NASA Astrophysics Data System (ADS)

    Godrèche, Claude; Majumdar, Satya N.; Schehr, Grégory

    2017-08-01

    We review recent advances on the record statistics of strongly correlated time series, whose entries denote the positions of a random walk or a Lévy flight on a line. After a brief survey of the theory of records for independent and identically distributed random variables, we focus on random walks. During the last few years, it was indeed realized that random walks are a very useful ‘laboratory’ to test the effects of correlations on the record statistics. We start with the simple one-dimensional random walk with symmetric jumps (both continuous and discrete) and discuss in detail the statistics of the number of records, as well as of the ages of the records, i.e. the lapses of time between two successive record breaking events. Then we review the results that were obtained for a wide variety of random walk models, including random walks with a linear drift, continuous time random walks, constrained random walks (like the random walk bridge) and the case of multiple independent random walkers. Finally, we discuss further observables related to records, like the record increments, as well as some questions raised by physical applications of record statistics, like the effects of measurement error and noise.
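    As a small illustration of the record statistics reviewed here, counting upper records in a sequence is straightforward, and for i.i.d. entries the expected number of records after n steps is the harmonic number H_n = Σ 1/k (the k-th entry is a record with probability 1/k). A sketch with a seeded Monte Carlo check of the i.i.d. baseline:

    ```python
    import numpy as np

    def count_records(x):
        """Number of upper records in a sequence (the first entry always counts)."""
        best, c = -np.inf, 0
        for v in x:
            if v > best:
                best, c = v, c + 1
        return c

    rng = np.random.default_rng(6)
    n, trials = 100, 20000
    mean_records = np.mean(
        [count_records(rng.standard_normal(n)) for _ in range(trials)]
    )
    harmonic = sum(1 / k for k in range(1, n + 1))  # H_100, about 5.19
    ```

    For a symmetric random walk, by contrast, the mean record number grows like √(4n/π), far faster than the logarithmic i.i.d. growth — that contrast is the "laboratory" aspect discussed in the review.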

  13. Should multiple imputation be the method of choice for handling missing data in randomized trials?

    PubMed Central

    Sullivan, Thomas R; White, Ian R; Salter, Amy B; Ryan, Philip; Lee, Katherine J

    2016-01-01

    The use of multiple imputation has increased markedly in recent years, and journal reviewers may expect to see multiple imputation used to handle missing data. However in randomized trials, where treatment group is always observed and independent of baseline covariates, other approaches may be preferable. Using data simulation we evaluated multiple imputation, performed both overall and separately by randomized group, across a range of commonly encountered scenarios. We considered both missing outcome and missing baseline data, with missing outcome data induced under missing at random mechanisms. Provided the analysis model was correctly specified, multiple imputation produced unbiased treatment effect estimates, but alternative unbiased approaches were often more efficient. When the analysis model overlooked an interaction effect involving randomized group, multiple imputation produced biased estimates of the average treatment effect when applied to missing outcome data, unless imputation was performed separately by randomized group. Based on these results, we conclude that multiple imputation should not be seen as the only acceptable way to handle missing data in randomized trials. In settings where multiple imputation is adopted, we recommend that imputation is carried out separately by randomized group. PMID:28034175
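    The recommendation to impute separately by randomized group can be illustrated with a toy single-imputation stand-in (group-mean fill-in rather than a full multiple-imputation procedure): imputing within each arm preserves arm-specific outcome distributions, which is what protects the treatment-effect estimate when the imputation model omits interactions with randomized group. A minimal pandas sketch with hypothetical data:

    ```python
    import numpy as np
    import pandas as pd

    # toy trial: outcome differs by arm; one missing outcome per arm
    df = pd.DataFrame({
        "arm": [0, 0, 0, 1, 1, 1],
        "y":   [1.0, 3.0, np.nan, 10.0, 14.0, np.nan],
    })
    # impute separately by randomized group (mean imputation as a stand-in)
    df["y_imp"] = df.groupby("arm")["y"].transform(lambda s: s.fillna(s.mean()))
    ```

    Pooled-mean imputation would instead fill both gaps with 7.0, dragging each arm's mean toward the other and biasing the arm difference.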

  14. Should multiple imputation be the method of choice for handling missing data in randomized trials?

    PubMed

    Sullivan, Thomas R; White, Ian R; Salter, Amy B; Ryan, Philip; Lee, Katherine J

    2016-01-01

    The use of multiple imputation has increased markedly in recent years, and journal reviewers may expect to see multiple imputation used to handle missing data. However in randomized trials, where treatment group is always observed and independent of baseline covariates, other approaches may be preferable. Using data simulation we evaluated multiple imputation, performed both overall and separately by randomized group, across a range of commonly encountered scenarios. We considered both missing outcome and missing baseline data, with missing outcome data induced under missing at random mechanisms. Provided the analysis model was correctly specified, multiple imputation produced unbiased treatment effect estimates, but alternative unbiased approaches were often more efficient. When the analysis model overlooked an interaction effect involving randomized group, multiple imputation produced biased estimates of the average treatment effect when applied to missing outcome data, unless imputation was performed separately by randomized group. Based on these results, we conclude that multiple imputation should not be seen as the only acceptable way to handle missing data in randomized trials. In settings where multiple imputation is adopted, we recommend that imputation is carried out separately by randomized group.

  15. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    PubMed

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
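    The starting point above — Boltzmann-Gibbs-Shannon entropy as the log-multiplicity of an independent process — can be checked numerically: for N independent trials with occupation numbers n_i = p_i N, the multinomial multiplicity M = N!/∏ n_i! satisfies (1/N) log M → −Σ p_i log p_i by Stirling's approximation. A quick check using log-gamma to avoid overflow:

    ```python
    import numpy as np
    from scipy.special import gammaln

    p = np.array([0.5, 0.3, 0.2])
    N = 100000
    n = np.rint(p * N).astype(int)                 # occupation numbers n_i = p_i N

    # log multinomial multiplicity: log N! - sum log n_i!
    log_M = gammaln(N + 1) - gammaln(n + 1).sum()
    shannon = -(p * np.log(p)).sum()               # about 1.030 nats
    # (1/N) log M agrees with the Shannon entropy up to O(log N / N)
    ```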

  16. SER Analysis of MPPM-Coded MIMO-FSO System over Uncorrelated and Correlated Gamma-Gamma Atmospheric Turbulence Channels

    NASA Astrophysics Data System (ADS)

    Khallaf, Haitham S.; Garrido-Balsells, José M.; Shalaby, Hossam M. H.; Sampei, Seiichi

    2015-12-01

    The performance of multiple-input multiple-output free space optical (MIMO-FSO) communication systems that adopt multipulse pulse position modulation (MPPM) techniques is analyzed. Both exact and approximate symbol-error rates (SERs) are derived for the cases of uncorrelated and correlated channels. The effects of background noise, receiver shot noise, and atmospheric turbulence are taken into consideration in our analysis. The random fluctuations of the received optical irradiance produced by atmospheric turbulence are modeled by the widely used gamma-gamma statistical distribution. Uncorrelated MIMO channels are modeled by the α-μ distribution. A closed-form expression for the probability density function of the received optical irradiance is derived for the case of correlated MIMO channels. Using our analytical expressions, we corroborate the degradation of system performance as the correlation coefficients between MIMO channels increase.
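    The gamma-gamma irradiance model used above can be sampled as the product of two independent unit-mean gamma variates (large-scale and small-scale turbulence), giving unit mean irradiance and scintillation index (1 + 1/α)(1 + 1/β) − 1. A seeded sketch with illustrative turbulence parameters (the values below are assumptions, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta = 4.0, 2.0              # illustrative turbulence parameters

    # gamma-gamma irradiance = product of two independent unit-mean gamma RVs
    x = rng.gamma(alpha, 1 / alpha, 100000)   # large-scale fluctuations
    y = rng.gamma(beta, 1 / beta, 100000)     # small-scale fluctuations
    I = x * y
    # E[I] = 1; Var[I] = (1 + 1/alpha)(1 + 1/beta) - 1 = 0.875 here
    ```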

  17. The effect of multiple encounters on short period comet orbits

    NASA Technical Reports Server (NTRS)

    Lowrey, B. E.

    1972-01-01

    The observed orbital elements of short period comets are found to be consistent with the hypothesis of derivation from long period comets as long as two assumptions are made. First, the distribution of short period comets has been randomized by multiple encounters with Jupiter and second, the short period comets have lower velocities of encounter with Jupiter than is generally expected. Some 16% of the observed short period comets have lower encounter velocities than is allowed mathematically using Laplace's method. This may be due to double encounter processes with Jupiter and Saturn, or as a result of prolonged encounters. The distribution of unobservable short period comets can be inferred in part from the observed comets. Many have orbits between Jupiter and Saturn with somewhat higher inclinations than those with perihelions near the earth. Debris from those comets may form the major component of the zodiacal dust.

  18. Coverage dependent molecular assembly of anthraquinone on Au(111)

    NASA Astrophysics Data System (ADS)

    DeLoach, Andrew S.; Conrad, Brad R.; Einstein, T. L.; Dougherty, Daniel B.

    2017-11-01

    A scanning tunneling microscopy study of anthraquinone (AQ) on the Au(111) surface shows that the molecules self-assemble into several structures depending on the local surface coverage. At high coverages, a close-packed saturated monolayer is observed, while at low coverages, mobile surface molecules coexist with stable chiral hexamer clusters. At intermediate coverages, a disordered 2D porous network interlinking close-packed islands is observed in contrast to the giant honeycomb networks observed for the same molecule on Cu(111). This difference verifies the predicted extreme sensitivity [J. Wyrick et al., Nano Lett. 11, 2944 (2011)] of the pore network to small changes in the surface electronic structure. Quantitative analysis of the 2D pore network reveals that the areas of the vacancy islands are distributed log-normally. Log-normal distributions are typically associated with the product of random variables (multiplicative noise), and we propose that the distribution of pore sizes for AQ on Au(111) originates from random linear rate constants for molecules to either desorb from the surface or detach from the region of a nucleated pore.
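    The multiplicative-noise origin of log-normality invoked here is easy to demonstrate numerically: the log of a product of many positive i.i.d. factors is a sum of i.i.d. terms, so by the central limit theorem the product is approximately log-normal. A seeded sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # 50000 products, each of 200 positive iid factors
    factors = rng.uniform(0.5, 1.5, size=(50000, 200))
    prod = factors.prod(axis=1)

    # log of the product = sum of iid logs, hence approximately normal,
    # so prod itself is approximately log-normal
    logs = np.log(prod)
    z = (logs - logs.mean()) / logs.std()
    skewness = (z ** 3).mean()          # near zero for a normal distribution
    ```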

  19. Coverage dependent molecular assembly of anthraquinone on Au(111).

    PubMed

    DeLoach, Andrew S; Conrad, Brad R; Einstein, T L; Dougherty, Daniel B

    2017-11-14

    A scanning tunneling microscopy study of anthraquinone (AQ) on the Au(111) surface shows that the molecules self-assemble into several structures depending on the local surface coverage. At high coverages, a close-packed saturated monolayer is observed, while at low coverages, mobile surface molecules coexist with stable chiral hexamer clusters. At intermediate coverages, a disordered 2D porous network interlinking close-packed islands is observed in contrast to the giant honeycomb networks observed for the same molecule on Cu(111). This difference verifies the predicted extreme sensitivity [J. Wyrick et al., Nano Lett. 11, 2944 (2011)] of the pore network to small changes in the surface electronic structure. Quantitative analysis of the 2D pore network reveals that the areas of the vacancy islands are distributed log-normally. Log-normal distributions are typically associated with the product of random variables (multiplicative noise), and we propose that the distribution of pore sizes for AQ on Au(111) originates from random linear rate constants for molecules to either desorb from the surface or detach from the region of a nucleated pore.

  20. Simulating propagation of coherent light in random media using the Fredholm type integral equation

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2017-06-01

    Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if such simulations must account for the coherence properties of light, they become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g., radiative transfer theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variations of these methods allow prediction of the behavior of coherent light, but only for an averaged realization of the scattering medium. This limits their application in studying many physical phenomena connected to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm-type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and finite-difference-type simulations. We also present an extension of the method to problems of multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
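    The discretize-and-solve route described above can be sketched with a Nyström method for a generic Fredholm equation of the second kind, u(x) = f(x) + λ∫K(x,y)u(y)dy: quadrature turns the integral into a matrix, and the equation becomes the linear system (I − λKW)u = f. The kernel below is a toy choice for illustration, not the paper's scattering kernel:

    ```python
    import numpy as np

    # Nystrom discretization of u(x) = f(x) + lam * \int_0^1 K(x, y) u(y) dy
    n = 200
    x = np.linspace(0, 1, n)
    w = np.full(n, 1 / (n - 1))          # trapezoid quadrature weights
    w[0] = w[-1] = 0.5 / (n - 1)

    K = np.exp(-np.abs(x[:, None] - x[None, :]))   # toy smoothing kernel
    f = np.ones(n)
    lam = 0.5

    # (I - lam * K W) u = f, where W applies the quadrature weights
    u = np.linalg.solve(np.eye(n) - lam * (K * w), f)
    ```

    The same discretized operator can be handed to an iterative solver (or sampled by Monte Carlo) when `n` is too large for a direct solve, which is the trade-off the abstract alludes to.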

  1. Rating the raters in a mixed model: An approach to deciphering the rater reliability

    NASA Astrophysics Data System (ADS)

    Shang, Junfeng; Wang, Yougui

    2013-05-01

    Rating the raters has attracted extensive attention in recent years. Ratings are complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is part of a rating, inconsistency among raters is a source of variance in the scores, so it is natural to verify the trustworthiness of ratings. Estimating rater reliability is therefore of great interest. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores that a rater gives to ratees are described by fixed effects determined by the ability of the ratees and random effects produced by disagreement among raters. In this mixed model, we derive the posterior distribution of the rater random effects for prediction. To quantitatively identify unreliable raters, the predictive influence function (PIF) serves as a criterion that compares the posterior distributions of the random effects between the full data set and rater-deleted data sets. A benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated in multiple simulated data sets and two real data sets.

  2. Freestyle multiple propeller flap reconstruction (jigsaw puzzle approach) for complicated back defects.

    PubMed

    Park, Sung Woo; Oh, Tae Suk; Eom, Jin Sup; Sun, Yoon Chi; Suh, Hyun Suk; Hong, Joon Pio

    2015-05-01

    The reconstruction of the posterior trunk remains a challenge, as defects can be extensive, with deep dead space and exposed fixation devices. Our goal was to achieve tension-free closure for complex defects of the posterior trunk. From August 2006 to May 2013, 18 cases were reconstructed with multiple flaps combining perforator and local skin flaps. The reconstructions were performed using a freestyle approach, starting with propeller flap(s) in a single- or multilobed design and sequentially adding adjacent random-pattern flaps, like fitting a puzzle. All defects achieved tensionless primary closure, and the final result had a jigsaw puzzle-like appearance. The average defect size was 139.6 cm² (range, 36-345 cm²). A total of 26 perforator flaps were used in addition to 19 random-pattern flaps in the 18 cases. In all cases, a single perforator was used for each propeller flap. Both the defect and the donor site achieved tension-free closure. Reconstruction was 100% successful without flap loss. One case of late infection was noted 12 months after surgery. Using multilobed propeller flaps in conjunction with random-pattern flaps in a freestyle approach, resembling putting together a jigsaw puzzle, we can achieve tension-free closure by distributing tension across multiple flaps, supply sufficient volume to obliterate dead space, and maintain reliable vascularity because the flaps do not need to be oversized. This can be a viable approach for reconstructing extensive defects of the posterior trunk.

  3. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
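    The flavor of a Wang-Landau random walk over a one-dimensional order-parameter subspace can be conveyed with a deliberately tiny example: estimating the log-density of states g(M) over the magnetization (number of up-spins) of N independent spins, where the exact answer is the binomial coefficient. This is a bare-bones sketch — the histogram-flatness check is omitted and the modification factor is simply halved each stage — not the macroscopically constrained scheme of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 10                          # independent spins; level k = number of up-spins
    logg = np.zeros(N + 1)          # running estimate of log g(k)
    state = rng.integers(0, 2, N)
    f = 1.0                         # modification factor (log-scale increment)

    while f > 1e-4:
        for _ in range(20000):
            i = rng.integers(N)
            k_old = int(state.sum())
            k_new = k_old + 1 - 2 * state[i]       # level after flipping spin i
            # Wang-Landau acceptance: min(1, g(k_old) / g(k_new))
            if np.log(rng.random()) < logg[k_old] - logg[k_new]:
                state[i] ^= 1
            logg[int(state.sum())] += f            # update at the current level
        f /= 2                       # shrink the modification factor each stage

    logg -= logg[0]                  # normalize so that g(0) = 1
    ```

    Constraining each one-dimensional walk to fixed values of the other order parameters, as the paper proposes, replaces the single level index here with one walk per constrained subspace.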

  4. Nonlinear Relaxation in Population Dynamics

    NASA Astrophysics Data System (ADS)

    Cirone, Markus A.; de Pasquale, Ferdinando; Spagnolo, Bernardo

    We analyze the nonlinear relaxation of a complex ecosystem composed of many interacting species. The ecological system is described by generalized Lotka-Volterra equations with a multiplicative noise. The transient dynamics is studied in the framework of the mean field theory and with random interaction between the species. We focus on the statistical properties of the asymptotic behaviour of the time integral of the ith population and on the distribution of the population and of the local field.
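    The multiplicative-noise dynamics described above can be sketched for a single species with an Euler-Maruyama integration of a stochastic logistic equation, dx = x(1 − x)dt + σx dW — a toy stand-in for the full generalized Lotka-Volterra system, with illustrative parameter values:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt, T, sigma = 0.01, 10.0, 0.2       # illustrative step, horizon, noise strength
    n = int(T / dt)
    x = np.empty(n)
    x[0] = 0.1                            # small initial population

    for t in range(n - 1):
        dW = rng.normal(0, np.sqrt(dt))   # Wiener increment
        # drift x(1-x) relaxes toward the carrying capacity x = 1;
        # the noise term is multiplicative (proportional to x)
        x[t + 1] = x[t] + x[t] * (1 - x[t]) * dt + sigma * x[t] * dW
    ```

    Because the noise is proportional to x, the population stays positive and fluctuates around the deterministic fixed point instead of being displaced by an additive offset.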

  5. A model for the flux-r.m.s. correlation in blazar variability or the minijets-in-a-jet statistical model

    NASA Astrophysics Data System (ADS)

    Biteau, J.; Giebels, B.

    2012-12-01

    Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that the variability originates in the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability in multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
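    The Pareto-law claim can be checked by direct sampling: for isotropically oriented regions moving with bulk Lorentz factor Γ, the Doppler factor is δ = 1/[Γ(1 − β cosθ)], and the boosted flux of a zone scales as a power of δ (δ⁴ is used below for illustration; the exponent depends on the assumed spectrum). A seeded sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    Gamma = 10.0                              # illustrative bulk Lorentz factor
    beta = np.sqrt(1 - 1 / Gamma**2)

    # isotropic orientations: cos(theta) uniform on [-1, 1]
    cos_t = rng.uniform(-1, 1, 200000)
    delta = 1 / (Gamma * (1 - beta * cos_t))  # Doppler factor per zone
    flux = delta ** 4                         # boosted flux of a single zone
    # isotropy makes p(delta) a power law, so flux follows a Pareto-type law
    # with a heavy tail spanning many decades
    ```

    Summing such heavy-tailed zone fluxes over many zones then produces the skewed, nearly log-normal total-flux distribution described in the abstract.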

  6. Performance Analysis of Direct-Sequence Code-Division Multiple-Access Communications with Asymmetric Quadrature Phase-Shift-Keying Modulation

    NASA Technical Reports Server (NTRS)

    Wang, C.-W.; Stark, W.

    2005-01-01

    This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
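    The Gaussian approximation referenced above has a well-known closed form in the simpler BPSK DS-CDMA baseline with random signature sequences (Pursley's standard Gaussian approximation): multiple-access interference from K − 1 users with processing gain N contributes variance (K − 1)/(3N) to the normalized decision statistic, giving P_b ≈ Q([(K − 1)/(3N) + N0/(2Eb)]^(−1/2)). A sketch of that baseline — the paper's AQPSK analysis generalizes it to unequal error protection:

    ```python
    import numpy as np
    from scipy.stats import norm

    def dscdma_ber_ga(K, N, ebn0_db):
        """Standard Gaussian approximation for BPSK DS-CDMA with random
        signature sequences: K users, processing gain N."""
        ebn0 = 10 ** (ebn0_db / 10)
        sinr = 1 / ((K - 1) / (3 * N) + 1 / (2 * ebn0))
        return norm.sf(np.sqrt(sinr))   # Q-function via the normal survival function
    ```

    For K = 1 the interference term vanishes and the formula reduces to the single-user BPSK result Q(√(2Eb/N0)); adding users raises the error floor, as the abstract's near-far discussion anticipates.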

  7. The effective propagation constants of SH wave in composites reinforced by dispersive parallel nanofibers

    NASA Astrophysics Data System (ADS)

    Qiang, FangWei; Wei, PeiJun; Li, Li

    2012-07-01

    In the present paper, the effective propagation constants of elastic SH waves in composites with randomly distributed parallel cylindrical nanofibers are studied. The surface stress effects are considered based on the surface elasticity theory and non-classical interfacial conditions between the nanofiber and the host are derived. The scattering waves from individual nanofibers embedded in an infinite elastic host are obtained by the plane wave expansion method. The scattering waves from all fibers are summed up to obtain the multiple scattering waves. The interactions among random dispersive nanofibers are taken into account by the effective field approximation. The effective propagation constants are obtained by the configurational average of the multiple scattering waves. The effective speed and attenuation of the averaged wave and the associated dynamical effective shear modulus of composites are numerically calculated. Based on the numerical results, the size effects of the nanofibers on the effective propagation constants and the effective modulus are discussed.

  8. Quantitative evaluation of local pulmonary distribution of TiO2 in rats following single or multiple intratracheal administrations of TiO2 nanoparticles using X-ray fluorescence microscopy.

    PubMed

    Zhang, Guihua; Shinohara, Naohide; Kano, Hirokazu; Senoh, Hideki; Suzuki, Masaaki; Sasaki, Takeshi; Fukushima, Shoji; Gamo, Masashi

    2016-10-01

    Uneven pulmonary nanoparticle (NP) distribution has been described in single-dose intratracheal administration tests. Multiple-dose intratracheal administrations with small quantities of NPs are expected to improve the unevenness of each dose. The differences in local pulmonary NP distribution (here called microdistribution) between single- and multiple-dose administrations may cause differential pulmonary responses; however, this has not been evaluated. Here, we quantitatively evaluated the pulmonary microdistribution (per mesh: 100 μm × 100 μm) of TiO2 in lung sections from rats following one, two, three, or four doses of TiO2 NPs at the same total dosage of 10 mg kg⁻¹, using X-ray fluorescence microscopy. The results indicate that: (i) multiple-dose administrations show lower variations in TiO2 content (ng mesh⁻¹) across sections of each lobe; (ii) TiO2 appears to be deposited more in the right caudal and accessory lobes, located downstream of the administration direction of the NP suspensions, and less in the right middle lobes, irrespective of the number of doses; and (iii) there are no prominent differences in the pattern of pulmonary TiO2 microdistribution between rats following single and multiple doses of TiO2 NPs. Additionally, the estimates of pulmonary TiO2 deposition for multiple-dose administrations imply that each dose of TiO2 would be randomly deposited in only part of a fixed 30-50% of the lung area. The evidence suggests that multiple-dose administrations do not offer remarkable advantages over single-dose administration with respect to pulmonary NP microdistribution, although they may reduce variations in TiO2 content across lung lobes. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Gravitational Wakes Sizes from Multiple Cassini Radio Occultations of Saturn's Rings

    NASA Astrophysics Data System (ADS)

    Marouf, E. A.; Wong, K. K.; French, R. G.; Rappaport, N. J.; McGhee, C. A.; Anabtawi, A.

    2016-12-01

    Voyager and Cassini radio occultation extinction and forward scattering observations of Saturn's C-Ring and Cassini Division imply power law particle size distributions extending from a few millimeters to several meters, with a power law index in the 2.8 to 3.2 range, depending on the specific ring feature. We extend size determination to the elongated and canted particle clusters (gravitational wakes) known to permeate Saturn's A- and B-Rings. We use multiple Cassini radio occultation observations over a range of ring opening angle B and wake viewing angle α to constrain the mean wake width W and thickness/height H, and the average ring area coverage fraction. The rings are modeled as a randomly blocked diffraction screen in the plane normal to the incidence direction. Collective particle shadows define the blocked area. The screen's transmittance is binary: blocked or unblocked. Wakes are modeled as a thin layer of elliptical cylinders populated by randomly positioned but uniformly distributed spherical particles. The cylinders can be immersed in a "classical" layer of spatially uniformly distributed particles. Numerical simulations of model diffraction patterns reveal two distinct components: cylindrical and spherical. The first dominates at small scattering angles and originates from specific locations within the footprint of the spacecraft antenna on the rings. The second dominates at large scattering angles and originates from the full footprint. We interpret Cassini extinction and scattering observations in the light of the simulation results. We compute and remove the contribution of the spherical component to the observed scattered signal spectra, assuming a known particle size distribution. A large residual spectral component is interpreted as the contribution of cylindrical (wake) diffraction. Its angular width determines a cylindrical shadow width that depends on the wake parameters (W,H) and the viewing geometry (α,B). Its strength constrains the mean fractional area covered (optical depth), hence the mean wake spacing. Self-consistent (W,H) values are estimated using a least-squares fit to results from multiple occultations. Example results for observed scattering by several inner A-Ring features suggest particle clusters (wakes) that are a few tens of meters wide and several meters thick.

  10. Network meta-analysis of disconnected networks: How dangerous are random baseline treatment effects?

    PubMed

    Béliveau, Audrey; Goring, Sarah; Platt, Robert W; Gustafson, Paul

    2017-12-01

    In network meta-analysis, the use of fixed baseline treatment effects (a priori independent) in a contrast-based approach is regularly preferred to the use of random baseline treatment effects (a priori dependent). That is because there is often no need to model baseline treatment effects, which carries a risk of model misspecification. However, in disconnected networks, fixed baseline treatment effects do not work (unless extra assumptions are made), as there is not enough information in the data to update the prior distribution on the contrasts between disconnected treatments. In this paper, we investigate to what extent the use of random baseline treatment effects is dangerous in disconnected networks. We take 2 publicly available datasets of connected networks and disconnect them in multiple ways. We then compare the results of treatment comparisons obtained from a Bayesian contrast-based analysis of each disconnected network, using random normally distributed and exchangeable baseline treatment effects, to those obtained from a Bayesian contrast-based analysis of the initial connected network using fixed baseline treatment effects. For the 2 datasets considered, we found that the use of random baseline treatment effects in disconnected networks was appropriate. Because those datasets were not cherry-picked, there should be other disconnected networks that would benefit from being analyzed using random baseline treatment effects. However, there remains a risk that the normality and exchangeability assumptions are inappropriate in other datasets, although we did not observe this in our case study. We provide code, so other datasets can be investigated. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    NASA Astrophysics Data System (ADS)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Each subblock is then marked with a specific polarization state and randomly distributed in 3D space, with adjustable longitudinal and transversal degrees of freedom. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple QR codes are encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization-only mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.

  12. Scaling exponents for ordered maxima

    DOE PAGES

    Ben-Naim, E.; Krapivsky, P. L.; Lemons, N. W.

    2015-12-22

    We study extreme value statistics of multiple sequences of random variables. For each sequence with N variables, independently drawn from the same distribution, the running maximum is defined as the largest variable to date. We compare the running maxima of m independent sequences and investigate the probability S_N that the maxima are perfectly ordered, that is, the running maximum of the first sequence is always larger than that of the second sequence, which is always larger than the running maximum of the third sequence, and so on. The probability S_N is universal: it does not depend on the distribution from which the random variables are drawn. For two sequences, S_N ~ N^(-1/2), and in general the decay is algebraic, S_N ~ N^(-σ_m), for large N. We analytically obtain the exponent σ_3 ≅ 1.302931 as the root of a transcendental equation. Moreover, the exponents σ_m grow with m, and we show that σ_m ~ m for large m.
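
    The N^(-1/2) decay for two sequences is easy to check numerically. A minimal Monte Carlo sketch (function name and trial counts are ours), exploiting the universality result by drawing from the uniform distribution:

```python
import random

def ordered_maxima_prob(n_vars, trials, seed=0):
    """Monte Carlo estimate of S_N for m = 2 sequences: the probability
    that the running maximum of the first sequence stays strictly ahead
    of the second's at every one of the N steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        max1 = max2 = float("-inf")
        ordered = True
        for _ in range(n_vars):
            max1 = max(max1, rng.random())
            max2 = max(max2, rng.random())
            if max1 <= max2:       # ordering broken; stop early
                ordered = False
                break
        hits += ordered
    return hits / trials

# If S_N ~ N^(-1/2), the ratio S_N / S_4N should approach 2 for large N.
s16 = ordered_maxima_prob(16, 20000)
s64 = ordered_maxima_prob(64, 20000)
```

    Repeating with a heavy-tailed distribution in place of `rng.random()` should, by the universality stated above, leave the estimates statistically unchanged.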

  13. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America.

  14. Two competing species in super-diffusive dynamical regimes

    NASA Astrophysics Data System (ADS)

    La Cognata, A.; Valenti, D.; Spagnolo, B.; Dubkov, A. A.

    2010-09-01

    We study the dynamics of two competing species within the framework of the generalized Lotka-Volterra equations, in the presence of multiplicative α-stable Lévy noise sources and a random, time-dependent interaction parameter. The species dynamics is characterized by two different regimes, exclusion of one species and coexistence of both, depending on the values of the interaction parameter, which obeys a Langevin equation with a periodically fluctuating bistable potential and an additive α-stable Lévy noise. The stochastic resonance phenomenon is analyzed for asymmetrically distributed noise sources. Finally, the effects of statistical dependence between the multiplicative and additive noise on the dynamics of the two species are studied.

  15. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs), along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers, as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install and use GASPRNG. To help illustrate linking with GASPRNG, demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and scales for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or workstation with an NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
    Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
    Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
    RAM: 512 MB to 732 MB of main memory on the host CPU, depending on the data type of the random numbers; 512 MB of GPU global memory.
    Classification: 4.13, 6.5.
    Nature of problem: Many computational science applications consume large numbers of random numbers; Monte Carlo simulations, for example, can consume as many random numbers as computing resources allow. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generation of independent streams of random numbers using graphical processing units (GPUs).
    Solution method: Multiple copies of random number generators on GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generation library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
    Running time: The tests provided take a few minutes to run.
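
    The usage model that GASPRNG supports — each worker consuming its own independent, reproducible pseudorandom stream — can be illustrated with a CPU-side toy sketch. This is not the GASPRNG/SPRNG API, and the naive integer seed-splitting below is only a stand-in for the library's far more careful stream derivation:

```python
import random

def spawn_streams(root_seed, n_streams):
    """Toy illustration of the SPRNG/GASPRNG usage model: hand each
    worker (CPU process or GPU block) its own reproducible stream.
    A real library derives stream seeds much more carefully than this
    simple arithmetic split."""
    return [random.Random(root_seed * 1_000_003 + i) for i in range(n_streams)]

streams = spawn_streams(root_seed=42, n_streams=4)
draws = [[g.random() for _ in range(3)] for g in streams]
# Re-spawning from the same root seed reproduces every stream exactly,
# while distinct streams produce distinct sequences.
```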

  16. Do siblings always form and evolve simultaneously? Testing the coevality of multiple protostellar systems through SEDs

    NASA Astrophysics Data System (ADS)

    Murillo, N. M.; van Dishoeck, E. F.; Tobin, J. J.; Fedele, D.

    2016-07-01

    Context. Multiplicity is common in field stars and among protostellar systems. Models suggest two formation paths: turbulent fragmentation and protostellar disk fragmentation. Aims: We investigate whether the coevality frequency of multiple protostellar systems can help to better understand their formation mechanism. The coevality frequency is determined by constraining the relative evolutionary stages of the components in a multiple system. Methods: Spectral energy distributions (SEDs) for known multiple protostars in Perseus were constructed from literature data. Herschel PACS photometric maps were used to sample the peak of the SED for systems with separations ≥7″, a crucial aspect in determining the evolutionary stage of a protostellar system. Inclination effects and the surrounding envelope and outflows were considered in order to decouple source geometry from evolution. This, together with the shape of the SED and the properties derived from it, was used to determine each system's coevality as accurately as possible. SED models were used to examine the frequency of non-coevality that is due to geometry alone. Results: We find a non-coevality frequency of 33 ± 10% from the comparison of SED shapes of resolved multiple systems. Other source parameters suggest a somewhat lower frequency of non-coevality. The frequency of apparent non-coevality due to random inclination angle pairings of model SEDs is 17 ± 0.5%. Observations of the outflows of resolved multiple systems do not suggest significant misalignments within multiple systems. Effects of unresolved multiples on the SED shape are also investigated. Conclusions: We find that one-third of the multiple protostellar systems sampled here are non-coeval, more than expected from random geometric orientations; the other two-thirds are coeval. Higher-order multiples show a tendency to be non-coeval. The frequency of non-coevality found here is most likely set at formation and enhanced by dynamical evolution.

  17. Computational wavelength resolution for in-line lensless holography: phase-coded diffraction patterns and wavefront group-sparsity

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Shevkunov, Igor; Petrov, Nikolay V.; Egiazarian, Karen

    2017-06-01

    In-line lensless holography is considered with a random phase modulation at the object plane. The forward wavefront propagation is modelled using the Fourier transform with the angular spectrum transfer function. The multiple intensities (holograms) recorded by the sensor are random, due to the random phase modulation, and noisy, with a Poissonian noise distribution. It is shown by computational experiments that high-accuracy reconstructions can be achieved with resolution down to two-thirds of the wavelength. With respect to the sensor pixel size, this is super-resolution by a factor of 32. The algorithm, designed for optimal super-resolution phase/amplitude reconstruction from Poissonian data, is based on the general methodology developed for phase retrieval with pixel-wise resolution in V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude", http://www.cs.tut.fi/ lasip/DDT/index3.html.

  18. Validation of optical codes based on 3D nanostructures

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2017-05-01

    Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, the information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask together with a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forests machine learning algorithms.

  19. Using Temporal Correlations and Full Distributions to Separate Intrinsic and Extrinsic Fluctuations in Biological Systems

    NASA Astrophysics Data System (ADS)

    Hilfinger, Andreas; Chen, Mark; Paulsson, Johan

    2012-12-01

    Studies of stochastic biological dynamics typically compare observed fluctuations to theoretically predicted variances, sometimes after separating the intrinsic randomness of the system from the enslaving influence of changing environments. But variances have been shown to discriminate surprisingly poorly between alternative mechanisms, while for other system properties no approaches exist that rigorously disentangle environmental influences from intrinsic effects. Here, we apply the theory of generalized random walks in random environments to derive exact rules for decomposing time series and higher statistics, rather than just variances. We show for which properties and for which classes of systems intrinsic fluctuations can be analyzed without accounting for extrinsic stochasticity and vice versa. We derive two independent experimental methods to measure the separate noise contributions and show how to use the additional information in temporal correlations to detect multiplicative effects in dynamical systems.

  20. The US business cycle: power law scaling for interacting units with complex internal structure

    NASA Astrophysics Data System (ADS)

    Ormerod, Paul

    2002-11-01

    In the social sciences, there is increasing evidence of the existence of power law distributions. The distribution of recessions in capitalist economies has recently been shown to follow such a distribution. The preferred explanation for this is self-organised criticality. Gene Stanley and colleagues propose an alternative, namely that power law scaling can arise from the interplay between random multiplicative growth and the complex structure of the units composing the system. This paper offers a parsimonious model of the US business cycle based on similar principles. The business cycle, along with long-term growth, is one of the two features which distinguish capitalism from all previously existing societies. Yet economics lacks a satisfactory theory of the cycle. The source of cycles is posited in economic theory to be a series of random shocks which are external to the system. In this model, the cycle is an internal feature of the system, arising from the level of industrial concentration of the agents and the interactions between them. In contrast to existing economic theories of the cycle, the model accounts for the key features of output growth in the US business cycle in the 20th century.

  1. Single-channel 40 Gbit/s digital coherent QAM quantum noise stream cipher transmission over 480 km.

    PubMed

    Yoshida, Masato; Hirooka, Toshihiko; Kasai, Keisuke; Nakazawa, Masataka

    2016-01-11

    We demonstrate the first 40 Gbit/s single-channel, polarization-multiplexed, 5 Gsymbol/s, 16 QAM quantum noise stream cipher (QNSC) transmission over 480 km, incorporating ASE quantum noise from EDFAs as well as the quantum shot noise of the coherent state with multiple photons for the random masking of data. By using a multi-bit encoding scheme and digital coherent transmission techniques, secure optical communication with a record data capacity and transmission distance has been realized. In this system, the signal level received by Eve is hidden by both amplitude and phase noise. The highest number of masked signals, 7.5 × 10^4, was achieved by using a QAM scheme with FEC, which makes it possible to reduce the output power from the transmitter while maintaining an error-free condition for Bob. We have also measured the noise distribution around the I and Q encrypted data and shown experimentally, with a data size as large as 2^25, that the noise has a Gaussian distribution with no correlations. This distribution is suitable for the random masking of data.

  2. A statistical model for analyzing the rotational error of single isocenter for multiple targets technique.

    PubMed

    Chang, Jenghwa

    2017-06-01

    To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single isocenter for multiple targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector for modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean and standard deviations σ_x, σ_y, σ_z. It was further assumed that the rotation of the clinical target volume (CTV) about the isocenter happens randomly and follows a 3D independent normal distribution with a zero mean and a uniform standard deviation σ_δ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation σ_R equal to the product of σ_δ·(π/180) and d_I⇔T, the distance between the isocenter and the CTV. The two random vectors (S and R) were summed, normalized, and transformed to spherical coordinates to derive the Chi distribution with three degrees of freedom for the radial coordinate of S+R. The PTV margin was determined using the critical value of this distribution at a 0.05 significance level, so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σ_R and d_I⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, such as stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σ_x = σ_y = σ_z = 0.715 mm), a σ_R = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%, or an additional 0.2-mm PTV margin is needed to prevent this loss of coverage.
    If we choose 0.2 mm as the threshold, any σ_R > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σ_δ that can be ignored is 0.45° (or 0.0079 rad) for d_I⇔T = 50 mm or 0.23° (or 0.004 rad) for d_I⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the isocenter and the target is large. © 2017 American Association of Physicists in Medicine.
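
    The margin arithmetic quoted in the abstract can be reproduced from the chi distribution with three degrees of freedom (the radial coordinate of a 3D standard normal vector). A minimal sketch (function names are ours) that recovers the roughly 0.2-mm extra margin:

```python
import math

def chi3_cdf(r):
    """CDF of the chi distribution with 3 degrees of freedom:
    F(r) = erf(r/sqrt(2)) - sqrt(2/pi) * r * exp(-r^2/2)."""
    return math.erf(r / math.sqrt(2)) - math.sqrt(2 / math.pi) * r * math.exp(-r * r / 2)

def chi3_quantile(p, lo=0.0, hi=10.0):
    """Invert the CDF by bisection (the CDF is monotone)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if chi3_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def extra_ptv_margin(sigma_setup, sigma_rot, coverage=0.95):
    """Extra margin needed when a rotational error (per-axis std
    sigma_rot) is added in quadrature to the setup error."""
    c = chi3_quantile(coverage)
    return c * (math.hypot(sigma_setup, sigma_rot) - sigma_setup)

# Numbers from the abstract: a 2-mm margin corresponds to
# sigma = 0.715 mm per axis (2 / 0.715 is the 95% critical value),
# and sigma_R = 0.328 mm then costs about an extra 0.2 mm of margin.
c95 = chi3_quantile(0.95)
extra = extra_ptv_margin(0.715, 0.328)
```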

  3. Two Cases of Tuberous Sclerosis Complex Suggestive of Complicating Multifocal Micronodular Pneumocyte Hyperplasia: A Case Report.

    PubMed

    Nishida, Chinatsu; Yatera, Kazuhiro; Kido, Takashi; Noguchi, Shingo; Akata, Kentaro; Hanaka, Minako; Yamasaki, Kei; Hoshino, Teppei; Shimono, Masayuki; Ishimoto, Hiroshi; Sakamoto, Noriho; Mukae, Hiroshi

    Multifocal micronodular pneumocyte hyperplasia (MMPH) is pathologically characterized by multifocal nodular hyperplasia of type II pneumocyte-like cells. MMPH usually occurs as a complication of tuberous sclerosis complex (TSC). MMPH patients tend to be asymptomatic or only slightly symptomatic, and MMPH tends to progress slowly and needs no treatment. We herein describe two cases of MMPH with its characteristic radiological features and clinical manifestations of TSC. Case 1: a 20-year-old female with definitive TSC diagnosed in infancy. Chest CT at the age of 18 revealed multiple nodular opacities and ground-glass attenuations in a scattered and random distribution in both lungs. Case 2: a 44-year-old female with probable TSC diagnosed at 36 years of age. Chest CT at the age of 43 showed random areas of small ground-glass attenuations, predominantly in the upper lung fields. Case 1 and Case 2 have had no respiratory symptoms or radiographic changes over the past two years and four years, respectively. Although pathological examinations of the lung were not performed because consent for surgical lung biopsies was unobtainable, we considered that these pulmonary manifestations were most likely MMPH with TSC, given the characteristic radiographic findings of multiple nodular opacities and ground-glass attenuations of 10 mm or less in size in a scattered distribution, and given that there have been no abnormal laboratory data or changes in the chest radiological findings for years. Neither patient is under treatment for pulmonary lesions. Although MMPH is a rare disease, multiple nodules and ground-glass attenuations on lung imaging should be considered as possible pulmonary manifestations in patients with TSC.

  4. Genetic variation, multiple paternity, and measures of reproductive success in the critically endangered hawksbill turtle (Eretmochelys imbricata).

    PubMed

    González-Garza, Blanca Idalia; Stow, Adam; Sánchez-Teyer, Lorenzo Felipe; Zapata-Pérez, Omar

    2015-12-01

    The Yucatán Peninsula in Mexico contains some of the largest breeding groups of the globally distributed and critically endangered hawksbill turtle (Eretmochelys imbricata). An improved understanding of the breeding system of this species, and of how its genetic variation is structured among nesting areas, is required before the threats to its survival can be properly evaluated. Here, we genotype 1195 hatchlings and 41 nesting females at 12 microsatellite loci to assess levels of multiple paternity and genetic variation, and whether individual levels of homozygosity are associated with reproductive success. Of the 50 clutches analyzed, only 6% showed multiple paternity. The distribution of pairwise relatedness among nesting localities (rookeries) was not random: within-rookery relatedness was elevated and declined with geographic distance, indicating some natal philopatry. Although there was no strong evidence that particular rookeries had lost allelic variation via drift, younger turtles had significantly lower levels of genetic variation than older turtles, suggesting some loss of genetic variation. At present there is no indication that levels of genetic variation are associated with measures of reproductive success such as clutch size, hatching success, and frequency of infertile eggs.

  5. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
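
    The scaled-quantile-residual diagnostic rests on the fact that a sample mapped through its true CDF yields uniform order statistics with mean k/(n+1). A toy sketch of the idea (the sqrt(n+2) scaling, which makes the residuals' typical size sample-size invariant, and the names are our reading, not the paper's code):

```python
import math
import random

def scaled_quantile_residuals(sample, trial_cdf):
    """Map the sample through a trial CDF. If the trial CDF is the true
    one, the sorted transformed values behave like uniform order
    statistics with mean k/(n+1); large residuals flag a bad fit.
    Scaling by sqrt(n+2) keeps their typical size O(1) for any n."""
    n = len(sample)
    u = sorted(trial_cdf(x) for x in sample)
    return [math.sqrt(n + 2) * (u[k] - (k + 1) / (n + 1)) for k in range(n)]

rng = random.Random(1)
data = [rng.expovariate(1.0) for _ in range(500)]

true_cdf = lambda x: 1.0 - math.exp(-x)   # correct exponential model
wrong_cdf = lambda x: min(x, 1.0)         # deliberately bad model

sqr_good = scaled_quantile_residuals(data, true_cdf)
sqr_bad = scaled_quantile_residuals(data, wrong_cdf)
# Residuals for the correct model stay small; the bad model's residuals
# are far larger, which is what the diagnostic plot makes visible.
```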

  6. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  7. Predicting structures in the Zone of Avoidance

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Colless, Matthew; Kraan-Korteweg, Renée C.; Gottlöber, Stefan

    2017-11-01

    The Zone of Avoidance (ZOA), whose apparent emptiness is an artefact of our Galaxy's dust, has been challenging observers as well as theorists for many years. Multiple attempts have been made on the observational side to map this region in order to better understand the local flows. On the theoretical side, however, this region is often simply statistically populated with structures, and no real attempt has been made to confront theoretical and observed matter distributions. This paper takes a step forward using constrained realizations (CRs) of the local Universe, shown to be perfect substitutes for local Universe-like simulations in smoothed high-density peak studies. Far from generating completely 'random' structures in the ZOA, the reconstruction technique arranges matter according to the surrounding environment of this region. More precisely, the mean distributions of structures in a series of constrained and random realizations (RRs) differ: while densities annihilate each other when averaging over 200 RRs, structures persist when summing 200 CRs. The probability for ZOA grid cells to be highly overdense follows a Gaussian with a 15 per cent mean in the random case, while the constrained case exhibits large tails. This implies that the areas with the largest probabilities most likely host a structure. Comparisons between these predictions and observations, such as those of the Puppis 3 cluster, show a remarkable agreement and allow us to assert the presence of the Vela supercluster, recently highlighted by observations, at about 180 h^-1 Mpc, right behind the thickest dust layers of our Galaxy.

  8. Pseudorandom number generation using chaotic true orbits of the Bernoulli map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro

    We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
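
    The exact computation on quadratic algebraic integers can be illustrated for the field Q(√2): representing the orbit point as a + b√2 with exact rationals, the doubling (Bernoulli) map 2x mod 1 only requires deciding the sign of p + q√2, which can be done exactly with integer arithmetic. A hedged sketch follows; the seed and the choice of field are illustrative, not necessarily those of the paper:

```python
from fractions import Fraction

def is_nonneg(p, q):
    """Exact sign test for p + q*sqrt(2) with rational p, q.
    (For an irrational seed, p and q never vanish simultaneously.)"""
    if p >= 0 and q >= 0:
        return True
    if p <= 0 and q <= 0:
        return False
    if p > 0:                       # q < 0: compare p with |q|*sqrt(2)
        return p * p >= 2 * q * q
    return 2 * q * q > p * p        # p < 0, q > 0

def bernoulli_bits(a, b, n):
    """First n bits of the Bernoulli (doubling) map started at
    x = a + b*sqrt(2), computed exactly in Q(sqrt(2)) with no rounding."""
    bits = []
    for _ in range(n):
        a, b = 2 * a, 2 * b                      # y = 2x, in [0, 2)
        bit = 1 if is_nonneg(a - 1, b) else 0    # emit 1 iff y >= 1
        a -= bit                                 # x = y - bit, back in [0, 1)
        bits.append(bit)
    return bits

# seed x0 = sqrt(2) - 1, a quadratic algebraic number in (0, 1); the bits
# are then the exact binary digits of x0, free of floating-point drift
bits = bernoulli_bits(Fraction(-1), Fraction(1), 32)
print(''.join(map(str, bits)))
```

    Distinct seeds of this form, spaced almost equidistantly in the unit interval, give the multiple non-coinciding sequences described in the abstract.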

  9. An integrate-over-temperature approach for enhanced sampling.

    PubMed

    Gao, Yi Qin

    2008-02-14

    A simple method is introduced to achieve an efficient random walk in energy space in molecular dynamics simulations, thus enhancing sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled over a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.
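
    The idea of combining Boltzmann factors at multiple temperatures can be sketched as an effective potential U_eff(U) = -(1/β0) ln Σ_k n_k exp(-β_k U), whose gradient is a tempered version of the original force. A toy illustration in reduced units; the temperature ladder and the uniform weights n_k are assumptions, and the actual method generates the weights iteratively:

```python
import numpy as np

kB = 1.0                              # reduced units (assumption)
T0 = 1.0                              # simulation temperature
temps = np.linspace(1.0, 5.0, 9)      # assumed temperature ladder T_1..T_K
betas = 1.0 / (kB * temps)
beta0 = 1.0 / (kB * T0)
weights = np.ones_like(betas)         # n_k; tuned on the fly in the real method

def u_eff(U):
    """Effective potential U_eff = -(1/beta0) * log(sum_k n_k exp(-beta_k U)),
    evaluated with a log-sum-exp for numerical stability."""
    U = np.asarray(U, dtype=float)
    a = -np.outer(betas, U) + np.log(weights)[:, None]
    m = a.max(axis=0)
    return -(m + np.log(np.exp(a - m).sum(axis=0))) / beta0

U = np.linspace(0.0, 50.0, 501)
dU = np.gradient(u_eff(U), U)
# the effective force is scaled down (0 < dU_eff/dU < 1), flattening barriers
print(dU.min(), dU.max())
```

    Sampling on U_eff at T0 then visits energies typical of every temperature in the ladder, which is the enhanced-sampling effect described above.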

  10. Star formation history: Modeling of visual binaries

    NASA Astrophysics Data System (ADS)

    Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.

    2018-05-01

    Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.
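
    Two of the pairing scenarios can be sketched with a minimal Monte Carlo draw, assuming a Salpeter-like IMF with illustrative mass limits (not the paper's actual prescriptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def salpeter_masses(n, m_lo=0.1, m_hi=10.0, alpha=2.35):
    """Masses from a Salpeter-like power-law IMF, p(m) ~ m**(-alpha),
    drawn by inverse-transform sampling (limits are illustrative)."""
    u = rng.random(n)
    a = 1.0 - alpha
    return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

def random_pairing(n):
    """Both components drawn independently from the IMF."""
    m1, m2 = salpeter_masses(n), salpeter_masses(n)
    return np.minimum(m1, m2) / np.maximum(m1, m2)   # q = m2/m1 <= 1

def primary_constrained_pairing(n):
    """Mass ratio when the primary comes from the IMF and q itself is
    drawn flat on (0, 1] (an assumed pairing prescription)."""
    return 1.0 - rng.random(n)

q_rand = random_pairing(100_000)
q_pcp = primary_constrained_pairing(100_000)
# the two scenarios yield measurably different mass-ratio distributions,
# which is what the comparison with observed binaries exploits
print(np.mean(q_rand < 0.2), np.mean(q_pcp < 0.2))
```
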

  11. A microwave scattering model for layered vegetation

    NASA Technical Reports Server (NTRS)

    Karam, Mostafa A.; Fung, Adrian K.; Lang, Roger H.; Chauhan, Narinder S.

    1992-01-01

    A microwave scattering model was developed for layered vegetation based on an iterative solution of the radiative transfer equation up to the second order to account for multiple scattering within the canopy and between the ground and the canopy. The model is designed to operate over a wide frequency range for both deciduous and coniferous forest and to account for the branch size distribution, leaf orientation distribution, and branch orientation distribution for each size. The canopy is modeled as a two-layered medium above a rough interface. The upper layer is the crown containing leaves, stems, and branches. The lower layer is the trunk region modeled as randomly positioned cylinders with a preferred orientation distribution above an irregular soil surface. Comparisons of this model with measurements from deciduous and coniferous forests show good agreement at several frequencies for both like and cross polarizations. Major features of the model needed to realize the agreement include allowance for: (1) branch size distribution, (2) second-order effects, and (3) tree component models valid over a wide range of frequencies.

  12. Distributed effects of methylphenidate on the network structure of the resting brain: a connectomic pattern classification analysis.

    PubMed

    Sripada, Chandra Sekhar; Kessler, Daniel; Welsh, Robert; Angstadt, Michael; Liberzon, Israel; Phan, K Luan; Scott, Clayton

    2013-11-01

    Methylphenidate is a psychostimulant medication that produces improvements in functions associated with multiple neurocognitive systems. To investigate the potentially distributed effects of methylphenidate on the brain's intrinsic network architecture, we coupled resting state imaging with multivariate pattern classification. In a within-subject, double-blind, placebo-controlled, randomized, counterbalanced, cross-over design, 32 healthy human volunteers received either methylphenidate or placebo prior to two fMRI resting state scans separated by approximately one week. Resting state connectomes were generated by placing regions of interest at regular intervals throughout the brain, and these connectomes were submitted for support vector machine analysis. We found that methylphenidate produces a distributed, reliably detected, multivariate neural signature. Methylphenidate effects were evident across multiple resting state networks, especially visual, somatomotor, and default networks. Methylphenidate reduced coupling within visual and somatomotor networks. In addition, default network exhibited decoupling with several task positive networks, consistent with methylphenidate modulation of the competitive relationship between these networks. These results suggest that connectivity changes within and between large-scale networks are potentially involved in the mechanisms by which methylphenidate improves attention functioning. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Mechanical properties of electrospun bilayer fibrous membranes as potential scaffolds for tissue engineering.

    PubMed

    Pu, Juan; Komvopoulos, Kyriakos

    2014-06-01

    Bilayer fibrous membranes of poly(l-lactic acid) (PLLA) were fabricated by electrospinning, using a parallel-disk mandrel configuration that resulted in the sequential deposition of a layer with fibers aligned across the two parallel disks and a layer with randomly oriented fibers, both layers deposited in a single process step. Membrane structure and fiber alignment were characterized by scanning electron microscopy and two-dimensional fast Fourier transform. Because of the intricacies of the generated electric field, bilayer membranes exhibited higher porosity than single-layer membranes consisting of randomly oriented fibers fabricated with a solid-drum collector. However, despite their higher porosity, bilayer membranes demonstrated generally higher elastic modulus, yield strength and toughness than single-layer membranes with random fibers. Bilayer membrane deformation at relatively high strain rates comprised multiple abrupt microfracture events characterized by discontinuous fiber breakage. Bilayer membrane elongation yielded excessive necking of the layer with random fibers and remarkable fiber stretching (on the order of 400%) in the layer with fibers aligned in the stress direction. In addition, fibers in both layers exhibited multiple localized necking, attributed to the nonuniform distribution of crystalline phases in the fibrillar structure. The high membrane porosity, good mechanical properties, and good biocompatibility and biodegradability of PLLA (demonstrated in previous studies) make the present bilayer membranes good scaffold candidates for a wide range of tissue engineering applications. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  14. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    PubMed

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since the unit phase segments are randomly distributed in-plane, the gaps contribute various spatial frequency components to an image and thus smooth the spectrum. Moreover, by randomly varying the phase segment size, spectral spikes associated with a uniform segment size can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed over a relatively narrow extent. The proposed phase distribution retains the primary functions of a random phase mask for holographic data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
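
    The effect of randomizing the segment size can be sketched in one dimension: a binary random phase mask with a fixed segment pitch produces spectral spikes at the pitch harmonics, while randomized segment sizes smear them out. A toy illustration; the segment sizes and the flat test page are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4096

def uniform_segment_mask(seg=16):
    """Binary {0, pi} random phase mask with a fixed segment size."""
    phases = rng.integers(0, 2, N // seg) * np.pi
    return np.exp(1j * np.repeat(phases, seg))

def random_segment_mask(seg_lo=8, seg_hi=25):
    """As above, but with randomized segment sizes, which smears the
    spectral spikes tied to a fixed segment pitch."""
    out = np.empty(N, dtype=complex)
    i = 0
    while i < N:
        s = int(rng.integers(seg_lo, seg_hi))
        out[i:i + s] = np.exp(1j * np.pi * rng.integers(0, 2))
        i += s
    return out

page = np.ones(N)   # flat "page data" as a toy stand-in for a recorded page
spec_u = np.abs(np.fft.fft(page * uniform_segment_mask())) ** 2
spec_r = np.abs(np.fft.fft(page * random_segment_mask())) ** 2
# peak-to-mean ratio away from DC: a crude measure of spectral spikiness
print(spec_u[1:].max() / spec_u.mean(), spec_r[1:].max() / spec_r.mean())
```

    Being phase-only, either mask leaves the total spectral energy unchanged; only its distribution over frequencies differs.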

  15. Phase transitions in the distribution of the Andreev conductance of superconductor-metal junctions with multiple transverse modes.

    PubMed

    Damle, Kedar; Majumdar, Satya N; Tripathi, Vikram; Vivo, Pierpaolo

    2011-10-21

    We compute analytically the full distribution of the Andreev conductance G(NS) of a metal-superconductor interface with a large number N(c) of transverse modes, using a random matrix approach. The probability distribution P(G(NS),N(c)) in the limit of large N(c) displays a Gaussian behavior near the average value ⟨G(NS)⟩=(2-√2)N(c) and asymmetric power-law tails in the two limits of very small and very large G(NS). In addition, we find a novel third regime sandwiched between the central Gaussian peak and the power-law tail for large G(NS). Weakly nonanalytic points separate these four regimes; these are shown to be consequences of three phase transitions in an associated Coulomb gas problem. © 2011 American Physical Society

  16. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it reflects the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.

  17. New constraints on modelling the random magnetic field of the MW

    NASA Astrophysics Data System (ADS)

    Beck, Marcus C.; Beck, Alexander M.; Beck, Rainer; Dolag, Klaus; Strong, Andrew W.; Nielaba, Peter

    2016-05-01

    We extend the description of the isotropic and anisotropic random components of the small-scale magnetic field within the existing magnetic field model of the Milky Way from Jansson & Farrar by including random realizations of the small-scale component. Using a magnetic-field power spectrum with Gaussian random fields, the NE2001 model for the thermal electrons and the Galactic cosmic-ray electron distribution from the current GALPROP model, we derive full-sky maps for the total and polarized synchrotron intensity as well as the Faraday rotation-measure distribution. While previous work assumed that small-scale fluctuations average out along the line of sight or computed only ensemble averages of random fields, we show that these fluctuations need to be carefully taken into account. Comparing with observational data, we obtain not only good agreement with 408 MHz total and WMAP7 22 GHz polarized intensity emission maps, but also an improved agreement with Galactic foreground rotation-measure maps and power spectra, whose amplitude and shape strongly depend on the parameters of the random field. We demonstrate that a correlation length of ≈22 pc (5 pc being a 5σ lower limit) is needed to match the slope of the observed power spectrum of Galactic foreground rotation-measure maps. Using multiple realizations also allows us to infer errors on individual observables. We find that previously used amplitudes for the random and anisotropic random magnetic field components need to be rescaled by factors of ≈0.3 and 0.6 to account for the new small-scale contributions. Our model predicts a rotation measure of -2.8±7.1 rad/m2 and +4.4±11.0 rad/m2 for the north and south Galactic poles respectively, in good agreement with observations. Applying our model to deflections of ultra-high-energy cosmic rays, we infer a mean deflection of ≈3.5±1.1 degrees for 60 EeV protons arriving from Cen A.
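
    Generating a random field component with a prescribed power spectrum can be sketched with FFTs: filter white Gaussian noise by the square root of the target spectrum. The slope below is a Kolmogorov-like illustrative assumption, not the paper's fitted value:

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_random_field(n=256, slope=-11.0 / 3.0):
    """2-D Gaussian random field with an isotropic power-law spectrum
    P(k) ~ k**slope, built by spectral filtering of white noise."""
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                       # avoid division by zero at k = 0
    amp = k ** (slope / 2.0)            # sqrt of the target power spectrum
    amp[0, 0] = 0.0                     # kill the DC mode: zero-mean field
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.fft.ifft2(noise * amp).real
    return field / field.std()

b = gaussian_random_field()
print(b.mean(), b.std())
```

    One such realization per component (suitably rescaled) stands in for the ensemble average used in earlier work, which is why multiple realizations also yield error estimates on the observables.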

  18. Resolvent-Techniques for Multiple Exercise Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no

    2015-02-15

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is using the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.

  19. Experimental study on the statistic characteristics of a 3x3 RF MIMO channel over a single conventional multimode fiber.

    PubMed

    Lei, Yi; Li, Jianqiang; Wu, Rui; Fan, Yuting; Fu, Songnian; Yin, Feifei; Dai, Yitang; Xu, Kun

    2017-06-01

    Based on the observed random fluctuation of the speckle pattern across the multimode fiber (MMF) facet and of the received optical power distribution across three output ports, we experimentally investigate the statistical characteristics of a 3×3 radio frequency multiple-input multiple-output (MIMO) channel enabled by mode division multiplexing in a conventional 50 µm MMF, using non-mode-selective three-dimensional waveguide photonic lanterns as mode multiplexer and demultiplexer. The impacts of mode coupling on the MIMO channel coefficients, channel matrix, and channel capacity have been analyzed over different fiber lengths. The results indicate that spatial multiplexing benefits from greater fiber lengths with stronger mode coupling, despite a higher optical loss.

  20. Multiple trapping on a comb structure as a model of electron transport in disordered nanostructured semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibatov, R. T., E-mail: ren-sib@bk.ru; Morozova, E. V., E-mail: kat-valezhanina@yandex.ru

    2015-05-15

    A model of dispersive transport in disordered nanostructured semiconductors has been proposed taking into account the percolation structure of a sample and the joint action of several mechanisms. Topological and energy disorders have been simultaneously taken into account within the multiple trapping model on a comb structure modeling the percolation character of trajectories. The joint action of several mechanisms has been described within random walks with a mixture of waiting time distributions. Integral transport equations with fractional derivatives have been obtained for an arbitrary density of localized states. The kinetics of the transient current has been calculated within the proposed new model in order to analyze time-of-flight experiments for nanostructured semiconductors.
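
    A random walk with a mixture of waiting-time distributions, as invoked above, can be sketched directly: heavy-tailed trap-release times mixed with ordinary exponential waits produce subdiffusive (dispersive) transport. A toy 1-D version with all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def ctrw_positions(n_walkers=20_000, t_max=1_000.0, p_trap=0.5, alpha=0.6):
    """Final positions of 1-D continuous-time random walkers whose waiting
    times are a mixture: exponential (free transport) with probability
    1 - p_trap, and Pareto-tailed with exponent alpha < 1 (deep trapping)
    with probability p_trap."""
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        n = active.sum()
        heavy = rng.random(n) < p_trap
        wait = np.where(heavy,
                        rng.pareto(alpha, n) + 1.0,  # heavy-tailed trap times
                        rng.exponential(1.0, n))     # ordinary waits
        t[active] += wait
        step = rng.choice([-1.0, 1.0], n)
        # jump only if the renewal happens before the observation time
        x[active] += np.where(t[active] <= t_max, step, 0.0)
        active[active] = t[active] <= t_max
    return x

x = ctrw_positions()
msd = np.mean(x**2)
# far below t_max, the value normal diffusion (unit-mean waits) would give:
# the heavy-tailed component suppresses the number of jumps (subdiffusion)
print(msd)
```
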

  1. A Multiple Sphere T-Matrix Fortran Code for Use on Parallel Computer Clusters

    NASA Technical Reports Server (NTRS)

    Mackowski, D. W.; Mishchenko, M. I.

    2011-01-01

    A general-purpose Fortran-90 code for calculation of the electromagnetic scattering and absorption properties of multiple sphere clusters is described. The code can calculate the efficiency factors and scattering matrix elements of the cluster for either fixed or random orientation with respect to the incident beam and for plane wave or localized-approximation Gaussian incident fields. In addition, the code can calculate maps of the electric field both interior and exterior to the spheres. The code is written with message passing interface instructions to enable the use on distributed memory compute clusters, and for such platforms the code can make feasible the calculation of absorption, scattering, and general EM characteristics of systems containing several thousand spheres.

  2. Interpretation of diffusion coefficients in nanostructured materials from random walk numerical simulation.

    PubMed

    Anta, Juan A; Mora-Seró, Iván; Dittrich, Thomas; Bisquert, Juan

    2008-08-14

    We make use of the random walk numerical simulation (RWNS) method to compute the "jump" diffusion coefficient of electrons in nanostructured materials via mean-square displacement. First, a summary of analytical results is given that relates the diffusion coefficient obtained from RWNS to those in the multiple-trapping (MT) and hopping models. Simulations are performed in a three-dimensional lattice of trap sites with energies distributed according to an exponential distribution and with a step-function distribution centered at the Fermi level. It is observed that once the stationary state is reached, the ensemble of particles follows Fermi-Dirac statistics with a well-defined Fermi level. In this stationary situation the diffusion coefficient obeys the theoretical predictions, so that RWNS effectively reproduces the MT model. Mobilities can also be computed when an electrical bias is applied, and they are observed to comply with the Einstein relation when compared with steady-state diffusion coefficients. The evolution of the system towards the stationary situation is also studied. When the diffusion coefficients are monitored along simulation time, a transition from anomalous to trap-limited transport is observed. The nature of this transition is discussed in terms of the evolution of the electron distribution and the Fermi level. All these results will facilitate the use of RW simulation and related methods to interpret steady-state as well as transient experimental techniques.
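
    The multiple-trapping random walk behind the RWNS picture can be sketched for a single particle: trap depths drawn from an exponential density of states, release times growing as exp(E/kT), and a "jump" diffusion coefficient read off the mean-square displacement. A minimal sketch above the dispersive-transport crossover temperature; units and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def jump_diffusion(n_jumps=20_000, T=1.5, T0=1.0):
    """Single-particle multiple-trapping walk on a cubic lattice.
    Trap depths E follow an exponential density of states of width kT0 = 1;
    the mean release time from a trap of depth E scales as exp(E/kT)
    (detrapping rate ~ exp(-E/kT)).  For T > T0 the mean wait is finite
    and transport is normal, with a finite jump diffusion coefficient."""
    E = rng.exponential(T0, n_jumps)            # depth of each visited trap
    waits = rng.exponential(np.exp(E / T))      # trap release times
    steps = rng.integers(0, 6, n_jumps)         # 6 nearest-neighbor moves
    moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]])
    r = moves[steps].cumsum(axis=0)             # trajectory on the lattice
    t = waits.cumsum()
    msd = (r**2).sum(axis=1).astype(float)
    return msd[-1] / (6.0 * t[-1])              # D_jump = MSD / (6 t)

D = jump_diffusion()
print(D)
```

    Running the same sketch with T < T0 makes the mean wait diverge, which is the anomalous (trap-limited) regime discussed in the abstract.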

  3. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products in an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are widely used in logistics and scheduling, are incorporated in this study. The distribution of multiple products in an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and formulated mathematically. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems, and the provision of a solution methodology to compute input schedules that yield the minimum total time violation of the due delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedules for the pipeline problem. This algorithm runs in no more than O(T·E) time. This dissertation also extends the study to examine some operating attributes and the problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced; it also runs in no more than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that the reversed-flow algorithms provide good solutions in comparison with the optimal solutions: only 25% of the problems tested were more than 30% above the optimal values, and approximately 40% of the tested problems were solved optimally by the algorithms.

  4. Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.

    NASA Astrophysics Data System (ADS)

    Stossel, Bryan Joseph

    1995-01-01

    Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. 
Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.
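
    The "random-walk" transfer function described above can be sketched numerically: the transfer function of N random unit impulses is, at each spatial frequency, a sum of N unit phasors, i.e., the endpoint of a random walk in the complex plane, and its exact zeros are isolated points rarely hit on a discrete grid. A toy illustration; the grid size and impulse count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 64                                    # number of impulses
n = 256                                   # grid size

# multiple-point impulse response: N unit impulses at random pixels
h = np.zeros((n, n))
np.add.at(h, (rng.integers(0, n, N), rng.integers(0, n, N)), 1.0)

# the transfer function at each frequency is a sum of N unit phasors,
# i.e. a random walk in the complex plane
H = np.fft.fft2(h)

print(abs(H[0, 0]))                       # DC value equals the impulse count
print(np.isclose(np.abs(H), 0).mean())    # exact zeros are (almost) never hit
```
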

  5. Numerical Modeling Describing the Effects of Heterogeneous Distributions of Asperities on the Quasi-static Evolution of Frictional Slip

    NASA Astrophysics Data System (ADS)

    Selvadurai, P. A.; Parker, J. M.; Glaser, S. D.

    2017-12-01

    A better understanding of how slip accumulates along faults and its relation to the breakdown of shear stress is beneficial to many engineering disciplines, such as hydraulic fracturing and the study of induced seismicity. Asperities forming along a preexisting fault resist the relative motion of the two sides of the interface and occur due to the interaction of the surface topographies. Here, we employ a finite element model to simulate circular partial slip asperities along a nominally flat frictional interface. The shear behavior of our partial slip asperity model closely matched the theory described by Cattaneo. The asperity model was employed to simulate a small section of an experimental fault formed between two bodies of polymethyl methacrylate, which consisted of multiple asperities whose locations and sizes were directly measured using a pressure-sensitive film. The quasi-static shear behavior of the interface was modeled for cyclical loading conditions, and the frictional dissipation (hysteresis) was normal stress dependent. We further our understanding by synthetically modeling lognormal size distributions of asperities that were randomly distributed in space. Synthetic distributions conserved the real contact area and aspects of the size distributions from the experimental case, allowing us to compare the constitutive behaviors based solely on spacing effects. Traction-slip behavior of the experimental interface appears to be considerably affected by spatial clustering of asperities that was not present in the randomly spaced, synthetic asperity distributions. Estimates of bulk interfacial shear stiffness were determined from the constitutive traction-slip behavior and were comparable to the theoretical estimates of multi-contact interfaces with non-interacting asperities.

  6. r.randomwalk v1.0, a multi-functional conceptual tool for mass movement routing

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Krenn, J.; Chu, H.-J.

    2015-09-01

    We introduce r.randomwalk, a flexible and multi-functional open source tool for backward- and forward-analyses of mass movement propagation. r.randomwalk builds on GRASS GIS, the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are: (i) multiple break criteria can be combined to compute an impact indicator score, (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter settings, resulting in an impact indicator index in the range 0-1, (iii) built-in functions for validation and visualization of the results are provided, (iv) observed landslides can be back-analyzed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk (i) for a single event, the Acheron Rock Avalanche in New Zealand, (ii) for landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) for lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.
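
    A much-reduced sketch of the constrained-random-walk idea (not the r.randomwalk code itself, which builds on GRASS GIS and combines multiple criteria): walks descend a DEM from a release pixel until a break criterion, here an angle-of-reach test, is met, and the per-cell hit fraction gives an impact indicator in the range 0-1. The toy DEM and all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)

def random_walk_impact(dem, start, angle_of_reach=0.2, n_walks=200):
    """Impact indicator in [0, 1] from constrained random walks on a DEM.
    Each walk moves to a random lower neighbor and stops when the average
    slope back to the release pixel falls below the tangent of the angle
    of reach (the break criterion), or when no lower neighbor exists."""
    nrows, ncols = dem.shape
    hits = np.zeros_like(dem)
    r0, c0 = start
    for _ in range(n_walks):
        r, c = r0, c0
        while True:
            nbrs = [(r + dr, c + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc)
                    and 0 <= r + dr < nrows and 0 <= c + dc < ncols
                    and dem[r + dr, c + dc] < dem[r, c]]
            if not nbrs:
                break                       # no lower neighbor: walk stops
            r, c = nbrs[rng.integers(len(nbrs))]
            hits[r, c] += 1
            dist = np.hypot(r - r0, c - c0)
            if dem[r0, c0] - dem[r, c] < angle_of_reach * dist:
                break                       # break criterion reached
    return hits / n_walks                   # fraction of walks hitting a cell

# toy DEM: a cone sloping away from a release point at the summit
y, x = np.mgrid[0:40, 0:40]
dem = 100.0 - 0.5 * np.hypot(x - 20, y - 20)
impact = random_walk_impact(dem, start=(20, 20))
print(impact.max(), impact.min())
```

    In the real tool, several such break criteria are combined into an impact indicator score, and randomized parameter sets turn the score into an impact indicator index.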

  7. r.randomwalk v1, a multi-functional conceptual tool for mass movement routing

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Krenn, J.; Chu, H.-J.

    2015-12-01

    We introduce r.randomwalk, a flexible and multi-functional open-source tool for backward and forward analyses of mass movement propagation. r.randomwalk builds on GRASS GIS (Geographic Resources Analysis Support System - Geographic Information System), the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are (i) multiple break criteria can be combined to compute an impact indicator score; (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter sets, resulting in an impact indicator index in the range 0-1; (iii) built-in functions for validation and visualization of the results are provided; (iv) observed landslides can be back analysed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk for (i) a single event, the Acheron rock avalanche in New Zealand; (ii) landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.

  8. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
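
    For contrast with the quantum algorithm, the classical baseline it accelerates is plain mean estimation of a randomized subroutine: the error decays as σ/√n, so accuracy ε costs on the order of σ²/ε² samples, versus roughly σ/ε subroutine calls for the quantum version. A toy classical estimator; the dart-throwing subroutine is an illustrative stand-in:

```python
import random

random.seed(0)

def subroutine():
    """A randomized subroutine with bounded variance: a Bernoulli estimate
    of pi/4 by dart-throwing into the unit square."""
    x, y = random.random(), random.random()
    return 1.0 if x * x + y * y <= 1.0 else 0.0

def classical_mc(n):
    """Classical mean estimation: error ~ sigma / sqrt(n), so reaching
    accuracy eps takes ~ sigma^2 / eps^2 samples; the quantum algorithm
    in the abstract needs only ~ sigma / eps calls."""
    return sum(subroutine() for _ in range(n)) / n

est = classical_mc(200_000)
print(est)   # close to pi/4 ~ 0.7854
```
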

  9. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  10. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistical analysis effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  11. Continuous operation of four-state continuous-variable quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2016-10-01

    We report on the development of a continuous-variable quantum key distribution (CV-QKD) system based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source with a wavelength of 1550 nm and a repetition rate of 10 MHz. The CV-QKD system can continuously generate a secret key that is secure against the entangling-cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilizes the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003).]; Alice randomly sends one of the four states {|±α⟩, |±iα⟩}, and Bob randomly performs x- or p-measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.

  12. Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.

    PubMed

    Verde, Pablo E

    2010-12-30

    In recent decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity and are so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. The new approach also allows substantial model checking, model diagnostics and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.

  13. Mathematical models application for mapping soils spatial distribution on the example of the farm from the North of Udmurt Republic of Russia

    NASA Astrophysics Data System (ADS)

    Dokuchaev, P. M.; Meshalkina, J. L.; Yaroslavtsev, A. M.

    2018-01-01

    A comparative analysis of geospatial soil modeling using multinomial logistic regression, decision trees, random forest, regression trees and support vector machine algorithms was conducted. Visual interpretation of the digital maps obtained, their comparison with the existing map, and quantitative assessment of the overall accuracy and kappa of detection for individual soil groups showed that multinomial logistic regression, support vector machine and random forest models with spatial prediction of conditional soil group distributions can be reliably used for mapping the study area. Detection was most accurate for lightly and moderately eroded sod-podzolic soils (Phaeozems Albic). Second in mean overall prediction accuracy were non-eroded and warp sod-podzolic soils, as well as sod-gley soils (Umbrisols Gleyic) and alluvial soils (Fluvisols Dystric, Umbric). Heavily eroded sod-podzolic and gray forest soils (Phaeozems Albic) were detected worst of all by the automatic classification methods.

  14. Optimized multiple quantum MAS lineshape simulations in solid state NMR

    NASA Astrophysics Data System (ADS)

    Brouwer, William J.; Davis, Michael C.; Mueller, Karl T.

    2009-10-01

    The majority of nuclei available for study in solid state Nuclear Magnetic Resonance have half-integer spin I>1/2, with corresponding electric quadrupole moment. As such, they may couple with a surrounding electric field gradient. This effect introduces anisotropic line broadening to spectra, arising from distinct chemical species within polycrystalline solids. In Multiple Quantum Magic Angle Spinning (MQMAS) experiments, a second frequency dimension is created, devoid of quadrupolar anisotropy. As a result, the center of gravity of peaks in the high resolution dimension is a function of isotropic second order quadrupole and chemical shift alone. However, for complex materials, these parameters take on a stochastic nature due in turn to structural and chemical disorder. Lineshapes may still overlap in the isotropic dimension, complicating the task of assignment and interpretation. A distributed computational approach is presented here which permits simulation of the two-dimensional MQMAS spectrum, generated by random variates from model distributions of isotropic chemical and quadrupole shifts. Owing to the non-convex nature of the residual sum of squares (RSS) function between experimental and simulated spectra, simulated annealing is used to optimize the simulation parameters. In this manner, local chemical environments for disordered materials may be characterized, and via a re-sampling approach, error estimates for parameters produced.
    Program summary
    Program title: mqmasOPT
    Catalogue identifier: AEEC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3650
    No. of bytes in distributed program, including test data, etc.: 73 853
    Distribution format: tar.gz
    Programming language: C, OCTAVE
    Computer: UNIX/Linux
    Operating system: UNIX/Linux
    Has the code been vectorised or parallelized?: Yes
    RAM: Example: (1597 powder angles) × (200 samples) × (81 F2 frequency pts) × (31 F1 frequency pts) = 3.5 MB, SMP AMD Opteron
    Classification: 2.3
    External routines: OCTAVE (http://www.gnu.org/software/octave/), GNU Scientific Library (http://www.gnu.org/software/gsl/), OPENMP (http://openmp.org/wp/)
    Nature of problem: The optimal simulation and modeling of multiple quantum magic angle spinning NMR spectra, for general systems, especially those with mild to significant disorder. The approach outlined and implemented in C and OCTAVE also produces model parameter error estimates.
    Solution method: A model for each distinct chemical site is first proposed, for the individual contribution of crystallite orientations to the spectrum. This model is averaged over all powder angles [1], as well as the (stochastic) parameters: isotropic chemical shift and quadrupole coupling constant. The latter is accomplished via sampling from a bivariate Gaussian distribution, using the Box-Muller algorithm to transform Sobol (quasi) random numbers [2]. A simulated annealing optimization is performed, and finally the non-linear jackknife [3] is applied in developing model parameter error estimates.
    Additional comments: The distribution contains a script, mqmasOpt.m, which runs in the OCTAVE language workspace.
    Running time: Example: (1597 powder angles) × (200 samples) × (81 F2 frequency pts) × (31 F1 frequency pts) = 58.35 seconds, SMP AMD Opteron.
    References:
    [1] S.K. Zaremba, Annali di Matematica Pura ed Applicata 73 (1966) 293.
    [2] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, 1992.
    [3] T. Fox, D. Hinkley, K. Larntz, Technometrics 22 (1980) 29.
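    The sampling step named in the program summary, transforming uniform variates into bivariate Gaussian variates with the Box-Muller algorithm, can be sketched as follows. For simplicity this sketch feeds the transform pseudo-random rather than Sobol quasi-random numbers, and the correlation mixing step is an illustrative assumption, not code from mqmasOPT:

```python
import math
import random

def box_muller(rng):
    """Transform two uniform variates into two independent
    standard-normal variates (Box-Muller transform)."""
    u1 = rng.random() or 1e-12   # guard against log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

def bivariate_gaussian(mean_x, sd_x, mean_y, sd_y, rho, rng):
    """Correlated pair (e.g. isotropic shift and quadrupole coupling)
    built from two independent normals via a Cholesky-style mix."""
    z1, z2 = box_muller(rng)
    x = mean_x + sd_x * z1
    y = mean_y + sd_y * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
    return x, y
```

    In the published program the same transform is applied to Sobol sequences, which cover the parameter distributions more evenly than pseudo-random draws.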

  15. Stability and performance of propulsion control systems with distributed control architectures and failures

    NASA Astrophysics Data System (ADS)

    Belapurkar, Rohit K.

    Future aircraft engine control systems will be based on a distributed architecture, in which the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques, along with weight reduction, improved performance and lower life-cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on the ARINC 825 communication protocol. Stability conditions and a control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance in the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault-tolerant control design methodology is proposed to benefit from the availability of additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault-tolerant control design can help to reduce the performance degradation in the presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single-input and multiple-input multiple-output control design techniques.

  16. Optical cryptography topology based on a three-dimensional particle-like distribution and diffractive imaging.

    PubMed

    Chen, Wen; Chen, Xudong

    2011-05-09

    In recent years, coherent diffractive imaging has been considered as a promising alternative for information retrieval instead of conventional interference methods. Coherent diffractive imaging using the X-ray light source has opened up a new research perspective for the measurement of non-crystalline and biological specimens, and can achieve unprecedentedly high resolutions. In this paper, we show how a three-dimensional (3D) particle-like distribution and coherent diffractive imaging can be applied for a study of optical cryptography. An optical multiple-random-phase-mask encoding approach is used, and the plaintext is considered as a series of particles distributed in a 3D space. A topology concept is also introduced into the proposed optical cryptosystem. During image decryption, a retrieval algorithm is developed to extract the plaintext from the ciphertexts. In addition, security and advantages of the proposed optical cryptography topology are also analyzed. © 2011 Optical Society of America

  17. Probabilistic measures of persistence and extinction in measles (meta)populations.

    PubMed

    Gunning, Christian E; Wearing, Helen J

    2013-08-01

    Persistence and extinction are fundamental processes in ecological systems that are difficult to accurately measure due to stochasticity and incomplete observation. Moreover, these processes operate on multiple scales, from individual populations to metapopulations. Here, we examine an extensive new data set of measles case reports and associated demographics in pre-vaccine era US cities, alongside a classic England & Wales data set. We first infer the per-population quasi-continuous distribution of log incidence. We then use stochastic, spatially implicit metapopulation models to explore the frequency of rescue events and apparent extinctions. We show that, unlike critical community size, the inferred distributions account for observational processes, allowing direct comparisons between metapopulations. The inferred distributions scale with population size. We use these scalings to estimate extinction boundary probabilities. We compare these predictions with measurements in individual populations and random aggregates of populations, highlighting the importance of medium-sized populations in metapopulation persistence. © 2013 John Wiley & Sons Ltd/CNRS.

  18. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it permits extrapolation of the probability of rockfall propagation either from partial data or purely theoretically. The propagation can be assumed frictional, which permits describing the propagation on average by a line of kinetic energy corresponding to the loss of energy along the path. But the loss of energy can also be assumed to be a multiplicative process or a purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are either based on theoretical considerations or obtained by fitting. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
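    The two loss-of-energy assumptions can be contrasted in a small simulation: additive losses sum to an approximately Gaussian stop-point distribution, while multiplicative losses compound into an approximately log-normal one. The increment and retention ranges below are illustrative placeholders, not fitted rockfall parameters:

```python
import math
import random

def stop_distance_additive(n_impacts, rng):
    """Each impact contributes an independent random increment, so the
    travel distance is a sum -> approximately Gaussian (CLT)."""
    return sum(rng.uniform(0.5, 1.5) for _ in range(n_impacts))

def stop_distance_multiplicative(n_impacts, rng):
    """Each impact rescales the distance by a random factor, so the
    travel distance is a product -> approximately log-normal."""
    d = 1.0
    for _ in range(n_impacts):
        d *= rng.uniform(0.8, 1.2)
    return d
```

    Taking the logarithm of the multiplicative distances turns the product into a sum of logs, which is why the log-normal family appears in that regime.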

  19. Underestimation of Variance of Predicted Health Utilities Derived from Multiattribute Utility Instruments.

    PubMed

    Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M

    2017-04-01

    Parameter uncertainty in the value sets of multiattribute utility-based instruments (MAUIs) has previously received little attention; ignoring it gives falsely precise utilities and leads to underestimation of the uncertainty in the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty in MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults (n = 3958), comparing three standard errors of the estimated mean utility in the CWF population: one using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set), one based on multiple imputation, and one using the conventional approach to MAUIs (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, similar to the SE based on the full predictive distribution. Ignoring the uncertainty of predicted health utilities derived from MAUIs can therefore lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
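    The pooling step behind a multiple-imputation correction of this kind can be sketched with Rubin's rules, in which the total variance adds the between-imputation spread to the average within-imputation variance. The utilities and squared SEs below are hypothetical, not the CWF estimates:

```python
import math

def rubins_rules(estimates, variances):
    """Pool m imputed-data analyses (Rubin's rules): pooled point
    estimate, and total variance = within + (1 + 1/m) * between."""
    m = len(estimates)
    q_bar = sum(estimates) / m
    within = sum(variances) / m
    between = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)
    total = within + (1.0 + 1.0 / m) * between
    return q_bar, math.sqrt(total)

# Hypothetical mean utilities and squared SEs from five imputed value sets.
pooled_mean, pooled_se = rubins_rules(
    [0.82, 0.83, 0.81, 0.84, 0.83], [0.003 ** 2] * 5)
```

    With these hypothetical inputs the pooled SE exceeds the single-imputation SE, mirroring the understatement reported above when value-set uncertainty is ignored.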

  20. Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading

    NASA Technical Reports Server (NTRS)

    Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.

    1998-01-01

    The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
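    The mission-history generator described above can be sketched as follows; the mission names, mix proportions, and mean occurrence rate are placeholders, not values from the report:

```python
import math
import random

def poisson_variate(mean, rng):
    """Poisson-distributed count via Knuth's multiplication method."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_mission_history(missions, mix, mean_repeats, n_blocks, rng):
    """Randomly select missions in proportion to the mix flown; repeat
    each selected mission a Poisson-distributed number of times."""
    history = []
    for _ in range(n_blocks):
        mission = rng.choices(missions, weights=mix)[0]
        history.extend([mission] * poisson_variate(mean_repeats, rng))
    return history
```

    Each realization of `history` would then be expanded into stress, temperature, and time points for the fracture-mechanics calculation; many realizations yield the cumulative distribution of crack propagation lives.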

  1. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.

  2. Motion of Euglena gracilis: Active fluctuations and velocity distribution

    NASA Astrophysics Data System (ADS)

    Romanczuk, P.; Romensky, M.; Scholz, D.; Lobaskin, V.; Schimansky-Geier, L.

    2015-07-01

    We study the velocity distribution of unicellular swimming algae Euglena gracilis using optical microscopy and active Brownian particle theory. To characterize a peculiar feature of the experimentally observed distribution at small velocities we use the concept of active fluctuations, which was recently proposed for the description of stochastically self-propelled particles [Romanczuk, P. and Schimansky-Geier, L., Phys. Rev. Lett. 106, 230601 (2011)]. In this concept, the fluctuating forces arise due to internal random performance of the propulsive motor. The fluctuating forces are directed in parallel to the heading direction, in which the propulsion acts. In the theory, we introduce the active motion via the depot model [Schweitzer, et al., Phys. Rev. Lett. 80(23), 5044 (1998)]. We demonstrate that the theoretical predictions based on the depot model with active fluctuations are consistent with the experimentally observed velocity distributions. In addition to the model with additive active noise, we obtain theoretical results for a constant propulsion with multiplicative noise.

  3. Smoking cessation and reduction in schizophrenia (SCARIS) with e-cigarette: study protocol for a randomized control trial.

    PubMed

    Caponnetto, Pasquale; Polosa, Riccardo; Auditore, Roberta; Minutolo, Giuseppe; Signorelli, Maria; Maglia, Marilena; Alamo, Angela; Palermo, Filippo; Aguglia, Eugenio

    2014-03-22

    It is well established in studies across several countries that tobacco smoking is more prevalent among schizophrenic patients than in the general population. Electronic cigarettes are becoming increasingly popular with smokers worldwide. To date there are no large randomized trials of electronic cigarettes in schizophrenic smokers. A well-designed trial is needed to compare the efficacy and safety of these products in this special population. We have designed a randomized controlled trial investigating the efficacy and safety of electronic cigarettes. The trial will take the form of a prospective 12-month randomized clinical study to evaluate smoking reduction, smoking abstinence and adverse events in schizophrenic smokers not intending to quit. We will also monitor quality of life and neurocognitive functioning, and measure participants' perception and satisfaction with the product. A ≥50% reduction in the number of cigarettes/day from baseline will be calculated at each study visit ("reducers"). Abstinence from smoking will be calculated at each study visit ("quitters"). Smokers who leave the study protocol before its completion and undergo the Early Termination Visit, or who do not satisfy the criteria for "reducers" or "quitters", will be defined as "non-responders". The differences in continuous variables between the three groups will be evaluated with the Kruskal-Wallis test, followed by the Dunn multiple comparison test. The differences between the three groups for normally distributed data will be evaluated with one-way ANOVA, followed by the Newman-Keuls multiple comparison test. Normality of the distribution will be evaluated with the Kolmogorov-Smirnov test. Any correlations between the variables under evaluation will be assessed by Spearman's r correlation. The Chi-square test will be used to compare qualitative data.
The main strengths of the SCARIS study are the following: it is the first large RCT of electronic cigarettes in schizophrenic patients, involving both inpatients and outpatients, with a three-arm study design and a long-term follow-up (52 weeks). The goal is to propose an effective intervention to reduce the risk of tobacco smoking, as a complementary tool to treat tobacco addiction in schizophrenia. ClinicalTrials.gov, NCT01979796.

  4. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…

  5. Pigmented skin lesion detection using random forest and wavelet-based texture

    NASA Astrophysics Data System (ADS)

    Hu, Ping; Yang, Tie-jun

    2016-10-01

    The incidence of cutaneous malignant melanoma, a disease of worldwide distribution and the deadliest form of skin cancer, has been rapidly increasing over the last few decades. Because advanced cutaneous melanoma is still incurable, early detection is an important step toward a reduction in mortality. Dermoscopy photographs are commonly used in melanoma diagnosis and can capture detailed features of a lesion. Great variability exists in the visual appearance of pigmented skin lesions. Therefore, in order to minimize the diagnostic errors that result from the difficulty and subjectivity of visual interpretation, an automatic detection approach is required. The objectives of this paper were to propose a hybrid method using random forest and Gabor wavelet transformation to accurately differentiate lesion from non-lesion areas in dermoscopy photographs, and to analyze segmentation accuracy. A random forest classifier consisting of a set of decision trees was used for classification. Gabor wavelets are a mathematical model of the visual cortical cells of the mammalian brain, and an image can be decomposed into multiple scales and multiple orientations by using them. The Gabor function has been recognized as a very useful tool in texture analysis, due to its optimal localization properties in both the spatial and frequency domains. Texture features based on Gabor wavelet transformation are computed from the Gabor-filtered image. Experimental results indicate the following: (1) the proposed algorithm based on random forest outperformed the state-of-the-art in pigmented skin lesion detection, and (2) the inclusion of Gabor-wavelet-based texture features improved segmentation accuracy significantly.

  6. Using Nucleon Multiplicities to Analyze Anti-Neutrino Interactions with Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkins, Miranda J.

    The most commonly used, simple interaction models have not accurately described the nuclear effects on either neutrino-nucleus or anti-neutrino-nucleus interactions. Comparison of data collected by the MINERvA experiment with these models shows a discrepancy in the reconstructed hadronic energy distribution at momentum transfers below 0.8 GeV. Two nuclear model effects that were previously not modeled are possible culprits of this discrepancy. The first is known as the random phase approximation (RPA) and the second is the addition of a meson exchange current process, also known as two-particle two-hole (2p2h) because it results in two particles leaving the nucleus with two holes left in their place. For the first time, a neutron counting software algorithm has been created and used to compare the multiplicity and spatial distributions of neutrons between the simulation and data. There is localized sensitivity to the RPA and 2p2h effects, and both help the simulation better describe the data. Additional systematic or model effects are present which cause the simulation to overproduce neutrons, and potential causes are discussed.

  7. Using hidden Markov models to align multiple sequences.

    PubMed

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
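    The probability computation described above, multiplying state (emission) and transition probabilities along a path, can be sketched with a toy two-state model; the states and probabilities below are illustrative, not a real profile HMM:

```python
def path_probability(path, symbols, start, transition, emission):
    """Probability of emitting `symbols` along a fixed state `path`:
    multiply each transition probability by the emission (frequency)
    probability of the symbol in the next state."""
    p = start[path[0]] * emission[path[0]][symbols[0]]
    for prev, state, sym in zip(path, path[1:], symbols[1:]):
        p *= transition[prev][state] * emission[state][sym]
    return p

# Toy two-state model (match / insert); all probabilities are made up.
start = {"M": 0.9, "I": 0.1}
transition = {"M": {"M": 0.8, "I": 0.2}, "I": {"M": 0.6, "I": 0.4}}
emission = {"M": {"A": 0.7, "G": 0.3}, "I": {"A": 0.5, "G": 0.5}}

p = path_probability(["M", "M", "I"], ["A", "A", "G"],
                     start, transition, emission)
```

    In a full HMM the path is hidden, so algorithms such as forward or Viterbi sum or maximize this quantity over all possible paths.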

  8. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = 0.013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

  9. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, the true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When the sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once a treatment effect estimate is defined from it. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model can be inappropriate summaries, and that the proposed model reduces this problem. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
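    The normalising step at the heart of this model can be illustrated in a few lines. The sketch below (Python, using SciPy; the data and parameters are invented for illustration, and the full model is Bayesian and estimates the transformation jointly with the other parameters, which this sketch does not attempt) applies a maximum-likelihood Box-Cox transformation to a skewed set of simulated effect estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical skewed "observed treatment effect estimates": log-normal
# random effects plus a small normal within-study sampling error.
effects = rng.lognormal(mean=0.0, sigma=0.7, size=500) + rng.normal(0.0, 0.05, size=500)
effects = effects - effects.min() + 1e-3   # Box-Cox requires strictly positive input

# Maximum-likelihood Box-Cox transform; lam is the fitted transformation parameter
transformed, lam = stats.boxcox(effects)

print(f"skew before: {stats.skew(effects):.2f}, after: {stats.skew(transformed):.2f}")
```

    After transformation the distribution is far closer to normal, so normal-theory machinery can be applied on the transformed scale and results back-transformed to a median and interquartile range.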

  10. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which have stratified, clustered unequal-probability sample designs.
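    The core resampling idea can be sketched compactly. The following Python sketch implements a simplified weighted finite population Bayesian bootstrap via a Pólya urn (one published variant initializes each unit's urn mass at its weight minus one; the function name and toy data here are invented, and real applications must also handle the stratum and cluster structure, which this omits):

```python
import numpy as np

rng = np.random.default_rng(42)

def weighted_fpbb(values, weights, rng):
    """Simplified weighted finite-population Bayesian bootstrap sketch.

    Expands a weighted sample of size n into a synthetic population of size
    N = sum(weights) via a Polya urn: unit i starts with mass w_i - 1 and
    gains mass 1 each time it is drawn.
    """
    n = len(values)
    N = int(round(weights.sum()))
    mass = weights - 1.0            # remaining "unseen" population mass per unit
    draws = []
    for _ in range(N - n):
        p = mass / mass.sum()
        i = rng.choice(n, p=p)
        draws.append(values[i])
        mass[i] += 1.0              # urn reinforcement
    # synthetic population = original sample plus the urn draws
    return np.concatenate([values, np.array(draws)])

# Hypothetical toy sample: two strata sampled at very different rates
values = np.array([1.0, 1.0, 5.0, 5.0])
weights = np.array([10.0, 10.0, 2.0, 2.0])   # inverse selection probabilities
pop = weighted_fpbb(values, weights, rng)
print(len(pop), pop.mean())
```

    The synthetic population's mean is pulled toward the weighted sample mean, so downstream analyses can treat it as an unweighted simple random sample from the superpopulation.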

  11. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which have stratified, clustered unequal-probability sample designs. PMID:29200608

  12. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
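    The transformation the report describes is, in modern terms, inverse-CDF sampling. A brief Python sketch for two of the named continuous distributions (the rate and shape parameters here are arbitrary, not taken from the report):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=100_000)       # uniform(0,1) "raw" random numbers

# Inverse-CDF transforms to two of the target distributions
lam, k, scale = 2.0, 1.5, 1.0
exponential = -np.log(1.0 - u) / lam                 # Exponential(rate=lam)
weibull = scale * (-np.log(1.0 - u)) ** (1.0 / k)    # Weibull(shape=k)

print(exponential.mean())           # should be close to 1/lam = 0.5
```

    The same uniform draws can be pushed through any invertible CDF; discrete targets such as the Poisson and binomial instead use a table search over the cumulative probabilities.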

  13. Analysis of foliage effects on mobile propagation in dense urban environments

    NASA Astrophysics Data System (ADS)

    Bronshtein, Alexander; Mazar, Reuven; Lu, I.-Tai

    2000-07-01

    Attempts to reduce the interference level and to increase the spectral efficiency of cellular radio communication systems operating in dense urban and suburban areas lead to the microcellular approach, with a consequent requirement to lower antenna heights. In large metropolitan areas with high buildings, this requirement causes a situation where the transmitting and receiving antennas are both located below the rooftops, and the city street acts as a type of waveguiding channel for the propagating signal. In this work, the city street is modeled as a random multislit waveguide with randomly distributed regions of foliage parallel to the building boundaries. The statistical propagation characteristics are expressed in terms of multiple ray-fields approaching the observer. Algorithms for predicting the path loss along the waveguide and for computing the transverse field structure are presented.

  14. Collimator of multiple plates with axially aligned identical random arrays of apertures

    NASA Technical Reports Server (NTRS)

    Hoover, R. B.; Underwood, J. H. (Inventor)

    1973-01-01

    A collimator is disclosed for examining the spatial location of distant sources of radiation and for imaging by projection, small, near sources of radiation. The collimator consists of a plurality of plates, all of which are pierced with an identical random array of apertures. The plates are mounted perpendicular to a common axis, with like apertures on consecutive plates axially aligned so as to form radiation channels parallel to the common axis. For near sources, the collimator is interposed between the source and a radiation detector and is translated perpendicular to the common axis so as to project radiation traveling parallel to the common axis incident to the detector. For far sources the collimator is scanned by rotating it in elevation and azimuth with a detector to determine the angular distribution of the radiation from the source.

  15. Numerical Simulations of Inclusion Behavior in Gas-Stirred Ladles

    NASA Astrophysics Data System (ADS)

    Lou, Wentao; Zhu, Miaoyong

    2013-06-01

    A coupled computational fluid dynamics-population balance model (CFD-PBM) has been proposed to investigate bubbly plume flow and inclusion behavior, including growth, size distribution, and removal, in gas-stirred ladles, and some new and important phenomena and mechanisms are presented. For the bubbly plume flow, a modified k-ɛ model with extra source terms to account for bubble-induced turbulence was adopted to model the turbulence, and the bubble turbulent dispersion force was taken into account to predict the gas volume fraction distribution in the turbulent gas-stirred system. For inclusion behavior, the phenomena of turbulent random motion of inclusions, bubble wakes, and slag eye formation on the molten steel surface were considered. In addition, the multiple mechanisms that promote inclusion growth (inclusion-inclusion collisions caused by turbulent random motion, shear rate in turbulent eddies, and differences in inclusion Stokes velocities) and the mechanisms that promote inclusion removal (bubble-inclusion turbulent random collision, bubble-inclusion turbulent shear collision, bubble-inclusion buoyancy collision, flotation of inclusions near the slag-metal interface, bubble wake capture, and wall adhesion) were investigated. The importance of the different mechanisms, the total inclusion removal ratio under different conditions, and the distribution of inclusion number densities in the ladle were discussed and clarified. The results show that at a low gas flow rate, inclusion growth is mainly attributed to both turbulent shear collision and Stokes collision, which is notably affected by the Stokes collision efficiency, and inclusion removal is mainly attributed to bubble-inclusion buoyancy collision and flotation of inclusions near the slag-metal interface.
At a higher gas flow rate, the inclusions exhibit turbulent random motion in the bubbly plume zone, and both inclusion-inclusion and inclusion-bubble turbulent random collisions become important for inclusion growth and removal. With increasing gas flow rate, the total removal ratio increases, but once the gas flow rate exceeds 200 NL/min in a 150-ton ladle, the total removal ratio barely changes. For larger inclusions, the number density in the bubbly plume zone is less than that in the sidewall recirculation zones, but for small inclusions, the distribution of number density shows the opposite trend.

  16. Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions

    DTIC Science & Technology

    2009-03-01

    Engineer's Thesis by Georgios Tsivgoulis, March 2009, 111 pages. Examines source localization in wireless sensor networks with randomly distributed elements under multipath propagation conditions, making use of the non-line-of-sight information. Subject terms: wireless sensor network, direction of arrival (DOA), multipath propagation.

  17. Spatially-Distributed Cost–Effectiveness Analysis Framework to Control Phosphorus from Agricultural Diffuse Pollution

    PubMed Central

    Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N.; Meng, Fande

    2015-01-01

    Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, quantifying the benefits of BMP implementation for receiving water quality at multiple spatial scales is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed, located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies in which BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies in which BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario for decreasing P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watershed, to region.
Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated model approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds. PMID:26313561

  18. Spatially-Distributed Cost-Effectiveness Analysis Framework to Control Phosphorus from Agricultural Diffuse Pollution.

    PubMed

    Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N; Meng, Fande

    2015-01-01

    Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, quantifying the benefits of BMP implementation for receiving water quality at multiple spatial scales is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed, located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies in which BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies in which BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario for decreasing P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watershed, to region.
Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated model approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds.

  19. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in the true values of the measurand, and the precision as random error, modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic datasets and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.

  20. Random Plant Viral Variants Attain Temporal Advantages During Systemic Infections and in Turn Resist other Variants of the Same Virus.

    PubMed

    Zhang, Xiao-Feng; Guo, Jiangbo; Zhang, Xiuchun; Meulia, Tea; Paul, Pierce; Madden, Laurence V; Li, Dawei; Qu, Feng

    2015-10-20

    Infection of plants with viruses containing multiple variants frequently leads to dominance by a few random variants in the systemically infected leaves (SLs), for which a plausible explanation is lacking. We show here that SL dominance by a given viral variant is adequately explained by its fortuitous lead in systemic spread, coupled with its resistance to superinfection by other variants. We analyzed the fate of a multi-variant turnip crinkle virus (TCV) population in Arabidopsis and N. benthamiana plants. Both wild-type and RNA silencing-defective plants displayed a similar pattern of random dominance by a few variant genotypes, thus discounting a prominent role for RNA silencing. When introduced to plants sequentially as two subpopulations, a twelve-hour head-start was sufficient for the first set to dominate. Finally, SLs of TCV-infected plants became highly resistant to secondary invasions of another TCV variant. We propose that random distribution of variant foci on inoculated leaves allows different variants to lead systemic movement in different plants. The leading variants then colonize large areas of SLs, and resist the superinfection of lagging variants in the same areas. In conclusion, superinfection resistance is the primary driver of random enrichment of viral variants in systemically infected plants.

  1. Reliability investigation of high-k/metal gate in nMOSFETs by three-dimensional kinetic Monte-Carlo simulation with multiple trap interactions

    NASA Astrophysics Data System (ADS)

    Li, Yun; Jiang, Hai; Lun, Zhiyuan; Wang, Yijiao; Huang, Peng; Hao, Hao; Du, Gang; Zhang, Xing; Liu, Xiaoyan

    2016-04-01

    Degradation behaviors in the high-k/metal gate stacks of nMOSFETs are investigated by three-dimensional (3D) kinetic Monte-Carlo (KMC) simulation with multiple trap coupling. Novel microscopic mechanisms are simultaneously considered in a compound system: (1) trapping/detrapping from/to substrate/gate; (2) trapping/detrapping to other traps; (3) trap generation and recombination. Interacting traps can contribute to random telegraph noise (RTN), bias temperature instability (BTI), and trap-assisted tunneling (TAT). Simulation results show that trap interaction induces higher probability and greater complexity in trapping/detrapping processes and greatly affects the characteristics of RTN and BTI. Different types of trap distribution cause largely different behaviors of RTN, BTI, and TAT. TAT currents caused by multiple trap coupling are sensitive to the gate voltage. Moreover, trap generation and recombination have great effects on the degradation of HfO2-based nMOSFETs under a large stress.

  2. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
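    The semicircle limit for diluted matrices is easy to check numerically. A small Python sketch (using i.i.d. ±1 entries with Bernoulli dilution, a simpler ensemble than the weakly dependent one treated in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
N, p = 500, 0.1                   # p = dilution (fraction of entries kept)

# Symmetric matrix of +/-1 entries, randomly diluted, scaled so the
# limiting spectrum is supported on [-2, 2].
A = rng.choice([-1.0, 1.0], size=(N, N))
mask = rng.uniform(size=(N, N)) < p
A = np.triu(A * mask, 1)          # keep the strict upper triangle
A = (A + A.T) / np.sqrt(N * p)    # kept entries have variance ~p before scaling

eigs = np.linalg.eigvalsh(A)
print(eigs.min(), eigs.max())     # spectral edges should approach -2 and 2
```

    With the 1/√(Np) scaling the bulk of the spectrum fills [-2, 2] regardless of the dilution level p, provided the mean number of kept entries per row, Np, is large.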

  3. Evaluating the Use of Random Distribution Theory to Introduce Statistical Inference Concepts to Business Students

    ERIC Educational Resources Information Center

    Larwin, Karen H.; Larwin, David A.

    2011-01-01

    Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…

  4. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  5. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  6. Mendelian Randomization.

    PubMed

    Grover, Sandeep; Del Greco M, Fabiola; Stein, Catherine M; Ziegler, Andreas

    2017-01-01

    Confounding and reverse causality have prevented us from drawing meaningful clinical interpretations even in well-powered observational studies. Confounding may be attributed to our inability to randomize the exposure variable in observational studies. Mendelian randomization (MR) is one approach to overcoming confounding. It utilizes one or more genetic polymorphisms as a proxy for the exposure variable of interest. Polymorphisms are randomly distributed in a population and are static throughout an individual's lifetime, and may thus help in inferring directionality in exposure-outcome associations. Genome-wide association studies (GWAS) or meta-analyses of GWAS are characterized by large sample sizes and the availability of many single nucleotide polymorphisms (SNPs), making GWAS-based MR an attractive approach. GWAS-based MR comes with specific challenges, including multiple causality. Despite these shortcomings, it remains one of the most powerful techniques for inferring causality. With MR still an evolving concept with complex statistical challenges, the literature is relatively scarce in terms of providing working examples incorporating real datasets. In this chapter, we provide a step-by-step guide for causal inference based on the principles of MR with a real dataset, using both individual and summary data from unrelated individuals. We suggest best possible practices and give recommendations based on the current literature.
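    The logic of MR can be demonstrated on simulated data. The sketch below (Python; all data and coefficients are invented for illustration) contrasts a confounded naive regression with the Wald ratio, the simplest single-instrument MR estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
beta_true = 0.5                   # causal effect of exposure X on outcome Y

# Hypothetical simulated data: G is a SNP (instrument), U an unmeasured confounder
G = rng.binomial(2, 0.3, size=n).astype(float)   # genotype coded 0/1/2
U = rng.normal(size=n)
X = 0.4 * G + U + rng.normal(size=n)             # exposure, confounded by U
Y = beta_true * X + U + rng.normal(size=n)       # outcome, confounded by U

# Naive regression of Y on X is biased upward by U; the Wald ratio
# (G-Y association over G-X association) recovers the causal effect,
# because G is independent of U.
naive = np.cov(X, Y)[0, 1] / np.var(X)
wald = np.cov(G, Y)[0, 1] / np.cov(G, X)[0, 1]

print(f"naive: {naive:.3f}, Wald ratio (MR): {wald:.3f}, true: {beta_true}")
```

    The Wald ratio is unbiased here precisely because the instrument satisfies the MR assumptions by construction; with real SNPs, pleiotropy can violate them.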

  7. High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems

    NASA Astrophysics Data System (ADS)

    Bose, S. K.

    1980-12-01

    Demand assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic but retain high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple point communication system carrying fixed length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are made following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand assigned mode. Schemes B and C operate all their forward message channels in a demand assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.

  8. Discovering non-random segregation of sister chromatids: the naïve treatment of a premature discovery

    PubMed Central

    Lark, Karl G.

    2013-01-01

    The discovery of non-random chromosome segregation (Figure 1) is discussed from the perspective of what was known in 1965 and 1966. The distinction between daughter, parent, or grandparent strands of DNA was developed in a bacterial system and led to the discovery that multiple copies of DNA elements of bacteria are not distributed randomly with respect to the age of the template strand. Experiments with higher eukaryotic cells demonstrated that during mitosis Mendel’s laws were violated; and the initial serendipitous choice of eukaryotic cell system led to the striking example of non-random segregation of parent and grandparent DNA template strands in primary cultures of cells derived from mouse embryos. Attempts to extrapolate these findings to established tissue culture lines demonstrated that the property could be lost. Experiments using plant root tips demonstrated that the phenomenon exists in plants and that it was, at some level, under genetic control. Despite publication in major journals and symposia (Lark et al., 1966, 1967; Lark, 1967, 1969a,b,c) the potential implications of these findings were ignored for several decades. Here we explore possible reasons for the pre-maturity (Stent, 1972) of this discovery. PMID:23378946

  9. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional counterparts are widely used in probability theory. Generating functions of those multidimensional distributions were also obtained.

  10. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    PubMed Central

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175
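    The three a priori quantities that determine an RGF prediction are trivial to extract from any tokenised text. A minimal Python illustration on a toy sentence (the string is invented; the RGF formula itself, which maps (M, N, kmax) to a predicted frequency curve, is not reproduced here):

```python
from collections import Counter

# The three a priori values that fix an RGF prediction, computed for a toy "text"
text = "the cat sat on the mat and the dog sat on the log"
words = text.split()

counts = Counter(words)
M = len(words)                  # total number of words
N = len(counts)                 # number of distinct words
k_max = max(counts.values())    # repetitions of the most common word

print(M, N, k_max)              # prints 13 8 4
```

    For Chinese, the same computation would be run twice, once with words and once with characters as tokens, giving the two differently shaped distributions compared in the paper.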

  11. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x1 = f1(U1, ..., Un), ..., xn = fn(U1, ..., Un) such that if U1, ..., Un are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x1, ..., xn coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
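    For a concrete two-dimensional case the recursive construction is just the marginal inverse CDF followed by the conditional inverse CDF. A Python sketch for a hypothetical joint distribution chosen so that both inverses are closed-form (X1 ~ Exp(1), and X2 given X1 = x1 is Exp with rate 1 + x1):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
U1, U2 = rng.uniform(size=n), rng.uniform(size=n)

# Recursive construction x1 = f1(U1), x2 = f2(U1, U2):
#   X1 ~ Exp(1);   X2 | X1 = x1  ~  Exp(rate = 1 + x1)
X1 = -np.log(1.0 - U1)                  # inverse marginal CDF of X1
X2 = -np.log(1.0 - U2) / (1.0 + X1)     # inverse conditional CDF of X2 given X1

print(X1.mean())                        # E[X1] = 1
```

    The sample reproduces the intended dependence: larger X1 makes X2 stochastically smaller, so the two coordinates come out negatively correlated.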

  12. Mixed-effects varying-coefficient model with skewed distribution coupled with cause-specific varying-coefficient hazard model with random-effects for longitudinal-competing risks data analysis.

    PubMed

    Lu, Tao; Wang, Min; Liu, Guangying; Dong, Guang-Hui; Qian, Feng

    2016-01-01

    It is well known that there is a strong relationship between HIV viral load and CD4 cell counts in AIDS studies. However, the relationship between them changes during the course of treatment and may vary among individuals. During treatment, some individuals may experience terminal events such as death. Because the terminal event may be related to the individual's viral load measurements, the terminal mechanism is non-ignorable. Furthermore, there exist competing risks from multiple types of events, such as AIDS-related death and other death. Most joint models for the analysis of longitudinal-survival data developed in the literature have focused on constant coefficients and assume symmetric distributions for the endpoints, which does not meet the need to investigate the nature of the varying relationship between HIV viral load and CD4 cell counts in practice. We develop a mixed-effects varying-coefficient model with skewed distribution, coupled with a cause-specific varying-coefficient hazard model with random effects, to deal with the varying relationship between the two endpoints for longitudinal-competing risks survival data. A fully Bayesian inference procedure is established to estimate parameters in the joint model. The proposed method is applied to a multicenter AIDS cohort study. Various scenario-based models that account for partial data features are compared. Some interesting findings are presented.

  13. Encounter success of free-ranging marine predator movements across a dynamic prey landscape.

    PubMed

    Sims, David W; Witt, Matthew J; Richardson, Anthony J; Southall, Emily J; Metcalfe, Julian D

    2006-05-22

    Movements of wide-ranging top predators can now be studied effectively using satellite and archival telemetry. However, the motivations underlying movements remain difficult to determine because trajectories are seldom related to key biological gradients, such as changing prey distributions. Here, we use a dynamic prey landscape of zooplankton biomass in the north-east Atlantic Ocean to examine active habitat selection in the plankton-feeding basking shark Cetorhinus maximus. The relative success of shark searches across this landscape was examined by comparing prey biomass encountered by sharks with encounters by random-walk simulations of 'model' sharks. Movements of transmitter-tagged sharks monitored for 964 days (16754 km estimated minimum distance) were concentrated on the European continental shelf in areas characterized by high seasonal productivity and complex prey distributions. We show that movements by adult and sub-adult sharks yielded consistently higher prey encounter rates than 90% of random-walk simulations. Behavioural patterns were consistent with basking sharks using search tactics structured across multiple scales to exploit the richest prey areas available in preferred habitats. Simple behavioural rules based on learned responses to previously encountered prey distributions may explain this high performance. This study highlights how dynamic prey landscapes enable active habitat selection in large predators to be investigated from a trophic perspective, an approach that may inform conservation by identifying critical habitat of vulnerable species.
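
    The track-versus-null-model comparison can be sketched as follows; the prey field, walk rules, and bias parameter below are illustrative assumptions, not the study's actual data or forager model:

```python
import random

SIZE, STEPS = 50, 500
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def prey(x, y):
    """Toy prey landscape: biomass increases along x (a stand-in for a
    productivity gradient; the study itself used zooplankton biomass maps)."""
    return float(x)

def walk(rng, biased=False, p_follow=0.8):
    """Total prey biomass encountered along one track on a torus."""
    x = y = SIZE // 2
    total = 0.0
    for _ in range(STEPS):
        if biased and rng.random() < p_follow:
            # gradient-following forager: step to the richest neighbour
            dx, dy = max(MOVES, key=lambda m: prey((x + m[0]) % SIZE,
                                                   (y + m[1]) % SIZE))
        else:
            dx, dy = rng.choice(MOVES)   # pure random walk (null model)
        x, y = (x + dx) % SIZE, (y + dy) % SIZE
        total += prey(x, y)
    return total

rng = random.Random(42)
forager = walk(rng, biased=True)
nulls = [walk(rng) for _ in range(200)]
beaten = sum(forager > n for n in nulls) / len(nulls)
```

    As in the study, search performance is scored by the fraction of random-walk simulations the observed track out-encounters.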

  14. Power Laws are Disguised Boltzmann Laws

    NASA Astrophysics Data System (ADS)

    Richmond, Peter; Solomon, Sorin

    Using a previously introduced model on generalized Lotka-Volterra dynamics together with some recent results for the solution of generalized Langevin equations, we derive analytically the equilibrium mean field solution for the probability distribution of wealth and show that it has two characteristic regimes. For large values of wealth, it takes the form of a Pareto style power law. For small values of wealth, w <= w_m, the distribution function tends sharply to zero. The origin of this law lies in the random multiplicative process built into the model. Whilst such results have been known since the time of Gibrat, the present framework allows for a stable power law in an arbitrary and irregular global dynamics, so long as the market is "fair", i.e., there is no net advantage to any particular group or individual. We further show that the dynamics of relative wealth is independent of the specific nature of the agent interactions and exhibits a universal character even though the total wealth may follow an arbitrary and complicated dynamics. In developing the theory, we draw parallels with conventional thermodynamics and derive for the system some new relations for the "thermodynamics" associated with the Generalized Lotka-Volterra type of stochastic dynamics. The power law that arises in the distribution function is identified with new additional logarithmic terms in the familiar Boltzmann distribution function for the system. These are a direct consequence of the multiplicative stochastic dynamics and are absent for the usual additive stochastic processes.
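
    The mechanism can be illustrated with a Kesten-type toy model (an assumption for illustration, not the paper's full Lotka-Volterra dynamics): wealth is multiplied by a random factor each step, and a lower barrier w_m plays the role of the lower cutoff; the stationary distribution then develops a Pareto tail above the barrier.

```python
import math
import random

def kesten_wealth(n_agents=2000, steps=1000, w_min=1.0,
                  mu=-0.045, sigma=0.3, seed=7):
    """Random multiplicative dynamics w <- max(lambda * w, w_min),
    with log(lambda) ~ N(mu, sigma^2) and E[log lambda] < 0.
    The reflecting barrier w_min generates a power-law tail with
    exponent alpha = -2*mu/sigma**2 (alpha = 1 for these toy values)."""
    rng = random.Random(seed)
    w = [w_min] * n_agents
    for _ in range(steps):
        w = [max(x * math.exp(rng.gauss(mu, sigma)), w_min) for x in w]
    return w

wealth = kesten_wealth()
```

    The purely multiplicative dynamics produce the heavy right tail; removing the barrier would instead give Gibrat's ever-spreading lognormal.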

  15. Transition in the waiting-time distribution of price-change events in a global socioeconomic system

    NASA Astrophysics Data System (ADS)

    Zhao, Guannan; McDonald, Mark; Fenn, Dan; Williams, Stacy; Johnson, Nicholas; Johnson, Neil F.

    2013-12-01

    The goal of developing a firmer theoretical understanding of inhomogeneous temporal processes, in particular the waiting times in some collective dynamical system, is attracting significant interest among physicists. Quantifying the deviations between the waiting-time distribution and the distribution generated by a random process may help unravel the feedback mechanisms that drive the underlying dynamics. We analyze the waiting-time distributions of high-frequency foreign exchange data for the best executable bid-ask prices across all major currencies. We find that the lognormal distribution yields a good overall fit for the waiting-time distribution between currency rate changes if both short and long waiting times are included. If we restrict our study to long waiting times, each currency pair’s distribution is consistent with a power-law tail with exponent near 3.5. However, for short waiting times, the overall distribution resembles one generated by an archetypal complex systems model in which boundedly rational agents compete for limited resources. Our findings suggest that a gradual transition arises in trading behavior between a fast regime in which traders act in a boundedly rational way and a slower one in which traders’ decisions are driven by generic feedback mechanisms across multiple timescales and hence produce similar power-law tails irrespective of currency type.

  16. Concise biomarker for spatial-temporal change in three-dimensional ultrasound measurement of carotid vessel wall and plaque thickness based on a graph-based random walk framework: Towards sensitive evaluation of response to therapy.

    PubMed

    Chiu, Bernard; Chen, Weifu; Cheng, Jieyu

    2016-12-01

    Rapid progression in total plaque area and volume measured from ultrasound images has been shown to be associated with an elevated risk of cardiovascular events. Since atherosclerosis is focal and predominantly occurs at the bifurcation, biomarkers that are able to quantify the spatial distribution of vessel-wall-plus-plaque thickness (VWT) change may allow for more sensitive detection of treatment effect. The goal of this paper is to develop simple and sensitive biomarkers to quantify responsiveness to therapies based on the spatial distribution of VWT-Change on the entire 2D carotid standardized map previously described. Point-wise VWT-Changes computed for each patient were reordered lexicographically into a high-dimensional data node in a graph. A graph-based random walk framework was applied with a novel Weighted Cosine (WCos) similarity function tailored to quantify responsiveness to therapy. The converging probability of each data node to the VWT regression template in the random walk process served as a scalar descriptor for VWT responsiveness to treatment. The WCos-based biomarker was 14 times more sensitive than the mean VWT-Change in discriminating between responsive and unresponsive subjects, based on the p-values obtained in T-tests. The proposed framework was extended to quantify where VWT-Change occurred by including multiple VWT-Change distribution templates representing focal changes at different regions. Experimental results show that the framework was effective in classifying carotid arteries with focal VWT-Change at different locations and may facilitate future investigations to correlate risk of cardiovascular events with the location where focal VWT-Change occurs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Radiative transfer theory for a random distribution of low velocity spheres as resonant isotropic scatterers

    NASA Astrophysics Data System (ADS)

    Sato, Haruo; Hayakawa, Toshihiko

    2014-10-01

    Short-period seismograms of earthquakes are complex, especially beneath volcanoes, where the S-wave mean free path is short and low-velocity bodies composed of melt or fluid are expected, in addition to random velocity inhomogeneities, as scattering sources. Resonant scattering inherent in a low-velocity body traps and releases waves with a delay time. Focusing on this delay-time phenomenon, we must seriously consider multiple resonant scattering processes. Since wave phases are complex in such a scattering medium, the radiative transfer theory has often been used to synthesize the variation of the mean square (MS) amplitude of waves; however, resonant scattering has not been well incorporated into the conventional radiative transfer theory. Here, as a simple mathematical model, we study a sequence of isotropic resonant scatterings of a scalar wavelet by low-velocity spheres at low frequencies, where the inside velocity is supposed to be sufficiently low. We first derive the total scattering cross-section per unit time for each order of scattering as the convolution kernel representing the decaying scattering response. Then, for a random and uniform distribution of such identical resonant isotropic scatterers, we build the propagator of the MS amplitude by using causality, a geometrical spreading factor, and the scattering loss. Using those propagators and convolution kernels, we formulate the radiative transfer equation for spherically impulsive radiation from a point source. The synthesized MS amplitude time trace shows a dip just after the direct arrival, a delayed swelling, and then a decaying tail at large lapse times. The delayed swelling is a prominent effect of resonant scattering. The spatial distribution of the synthesized MS amplitude swells near the source region and approaches a bell shape, like a diffusion solution, at large lapse times.

  18. Application of randomly oriented spheroids for retrieval of dust particle parameters from multiwavelength lidar measurements

    NASA Astrophysics Data System (ADS)

    Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.

    2010-11-01

    Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on the Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two components: one composed only of spherical particles and the second composed of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with a size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust particles. The uncertainty of the retrieved particle surface, volume concentration, and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode, measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. The algorithm was shown to provide reasonable results consistent with the available independent information about the observed aerosol event.

  19. Radiance and polarization of multiple scattered light from haze and clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1968-08-01

    The radiance and polarization of multiple scattered light is calculated from the Stokes' vectors by a Monte Carlo method. The exact scattering matrix for a typical haze and for a cloud whose spherical drops have an average radius of 12 μm is calculated from the Mie theory. The Stokes' vector is transformed in a collision by this scattering matrix and the rotation matrix. The two angles that define the photon direction after scattering are chosen by a random process that correctly simulates the actual distribution functions for both angles. The Monte Carlo results for Rayleigh scattering compare favorably with well known tabulated results. Curves are given of the reflected and transmitted radiances and polarizations for both the haze and cloud models and for several solar angles, optical thicknesses, and surface albedos. The dependence on these various parameters is discussed.
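
    The random choice of scattering angles mentioned here can be illustrated for the simpler Rayleigh case (the paper's Mie scattering matrices are far richer); a standard rejection sampler for the Rayleigh phase function p(μ) ∝ 1 + μ², with μ = cos θ, might look like:

```python
import random

def sample_rayleigh_mu(rng):
    """Draw mu = cos(theta) from the Rayleigh phase function
    p(mu) proportional to 1 + mu**2 by rejection sampling:
    propose mu ~ U(-1, 1) and accept with probability (1 + mu**2)/2,
    since 2 bounds the unnormalized density on [-1, 1]."""
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.random() * 2.0 <= 1.0 + mu * mu:
            return mu

rng = random.Random(0)
mus = [sample_rayleigh_mu(rng) for _ in range(20000)]
```

    The azimuthal angle is handled analogously; for polarized Mie scattering the joint angular density depends on the incoming Stokes vector, so the sampler is correspondingly more involved.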

  20. Impurity Induced Phase Competition and Supersolidity

    NASA Astrophysics Data System (ADS)

    Karmakar, Madhuparna; Ganesh, R.

    2017-12-01

    Several material families show competition between superconductivity and other orders. When such competition is driven by doping, it invariably involves spatial inhomogeneities which can seed competing orders. We study impurity-induced charge order in the attractive Hubbard model, a prototypical model for competition between superconductivity and charge density wave order. We show that a single impurity induces a charge-ordered texture over a length scale set by the energy cost of the competing phase. Our results are consistent with a strong-coupling field theory proposed earlier in which superconducting and charge order parameters form components of an SO(3) vector field. To discuss the effects of multiple impurities, we focus on two cases: correlated and random distributions. In the correlated case, the CDW puddles around each impurity overlap coherently leading to a "supersolid" phase with coexisting pairing and charge order. In contrast, a random distribution of impurities does not lead to coherent CDW formation. We argue that the energy lowering from coherent ordering can have a feedback effect, driving correlations between impurities. This can be understood as arising from an RKKY-like interaction, mediated by impurity textures. We discuss implications for charge order in the cuprates and doped CDW materials such as NbSe2.

  1. Stochastic space interval as a link between quantum randomness and macroscopic randomness?

    NASA Astrophysics Data System (ADS)

    Haug, Espen Gaarder; Hoff, Harald

    2018-03-01

    For many stochastic phenomena, we observe statistical distributions that have fat tails and high peaks compared to the Gaussian distribution. In this paper, we explain how observable statistical distributions in the macroscopic world could be related to randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in subatomic particle randomness that is Gaussian, or very close to Gaussian, distributed, but that they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three-and-a-half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat tails and high peaks relative to the Gaussian distribution in stock and commodity prices and in many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).

  2. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, W; Farr, J

    2015-06-15

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computational burden and accuracy. Methods: A random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scattering (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model derives a spatial distribution from the angular one to accelerate the computation and to decrease the memory usage. From the physics and from comparison with MC simulations, we have determined and analytically expressed the critical variables affecting dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse-square law, which can significantly reduce the computational burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.
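
    The gamma test quoted in the results can be sketched in 1D. In the x%/ymm/z% notation, x is the dose-difference criterion (as a fraction of the maximum dose), y the distance-to-agreement, and z the low-dose threshold. A simplified global-gamma implementation on a toy dose profile (parameters below are assumptions for the demo, not the paper's phantoms):

```python
import math

def gamma_pass_rate(ref, ev, dx_mm, dd_frac, dta_mm, thresh_frac):
    """Simplified 1D global gamma analysis.
    ref, ev: dose arrays on the same grid with spacing dx_mm;
    dd_frac: dose criterion as a fraction of the reference maximum;
    dta_mm: distance-to-agreement criterion;
    thresh_frac: points below this fraction of max dose are ignored."""
    d_max = max(ref)
    dd = dd_frac * d_max
    evaluated = passed = 0
    for i, dr in enumerate(ref):
        if dr < thresh_frac * d_max:
            continue                     # below the low-dose threshold
        evaluated += 1
        g = min(math.hypot((de - dr) / dd, (j - i) * dx_mm / dta_mm)
                for j, de in enumerate(ev))
        if g <= 1.0:
            passed += 1
    return passed / evaluated

# Toy example: Gaussian dose bump, evaluated profile shifted by 1 mm.
xs = [0.5 * i - 25.0 for i in range(101)]             # 0.5 mm grid spacing
ref = [math.exp(-x * x / 200.0) for x in xs]          # sigma = 10 mm
ev = [math.exp(-(x - 1.0) ** 2 / 200.0) for x in xs]  # 1 mm spatial shift
rate = gamma_pass_rate(ref, ev, dx_mm=0.5, dd_frac=0.03,
                       dta_mm=3.0, thresh_frac=0.10)
```

    A 1 mm shift passes the 3%/3mm/10% criterion everywhere, as expected, since a matching dose always lies within the distance-to-agreement.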

  3. Distributed Detection with Collisions in a Random, Single-Hop Wireless Sensor Network

    DTIC Science & Technology

    2013-05-26

    Approved for public release; distribution is unlimited. Gene T. Whipps (U.S. Army Research Laboratory, Adelphi, MD 20783, and The Ohio State University), Emre Ertin and Randolph L. Moses (The Ohio State University). We consider the problem of ...

  4. Stochastic scheduling on a repairable manufacturing system

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cao, Jinhua

    1995-08-01

    In this paper, we consider some stochastic scheduling problems for a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job's processing must restart some time later, once the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having exponentially distributed random due dates. These results generalize some previous ones.

  5. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution, which is demonstrated to give a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
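
    The gamma-mixed Poisson (negative binomial) mechanism described here is easy to simulate. In the sketch below the shape/scale values are arbitrary, and the small-λ Poisson sampler is Knuth's classic product-of-uniforms method:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def overdispersed_counts(n=20000, shape=2.0, scale=2.0, seed=3):
    """Counts drawn as N_i ~ Poisson(lambda_i), lambda_i ~ Gamma(shape, scale).
    Marginally negative binomial: mean = shape*scale and
    variance = mean + mean**2/shape > mean (overdispersion)."""
    rng = random.Random(seed)
    return [poisson(rng, rng.gammavariate(shape, scale)) for _ in range(n)]

counts = overdispersed_counts()
```

    Swapping `rng.gammavariate` for an inverse Gaussian sampler would give the paper's alternative mixture, with a different (lighter) tail for the mixing distribution.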

  6. Bridging Three Orders of Magnitude: Multiple Scattered Waves Sense Fractal Microscopic Structures via Dispersion

    NASA Astrophysics Data System (ADS)

    Lambert, Simon A.; Näsholm, Sven Peter; Nordsletten, David; Michler, Christian; Juge, Lauriane; Serfaty, Jean-Michel; Bilston, Lynne; Guzina, Bojan; Holm, Sverre; Sinkus, Ralph

    2015-08-01

    Wave scattering provides profound insight into the structure of matter. Typically, the ability to sense microstructure is determined by the ratio of scatterer size to probing wavelength. Here, we address the question of whether macroscopic waves can report back the presence and distribution of microscopic scatterers despite several orders of magnitude difference in scale between wavelength and scatterer size. In our analysis, monosized hard scatterers 5 μ m in radius are immersed in lossless gelatin phantoms to investigate the effect of multiple reflections on the propagation of shear waves with millimeter wavelength. Steady-state monochromatic waves are imaged in situ via magnetic resonance imaging, enabling quantification of the phase velocity at a voxel size big enough to contain thousands of individual scatterers, but small enough to resolve the wavelength. We show in theory, experiments, and simulations that the resulting coherent superposition of multiple reflections gives rise to power-law dispersion at the macroscopic scale if the scatterer distribution exhibits apparent fractality over an effective length scale that is comparable to the probing wavelength. Since apparent fractality is naturally present in any random medium, microstructure can thereby leave its fingerprint on the macroscopically quantifiable power-law exponent. Our results are generic to wave phenomena and carry great potential for sensing microstructure that exhibits intrinsic fractality, such as, for instance, vasculature.

  7. A Wave Chaotic Study of Quantum Graphs with Microwave Networks

    NASA Astrophysics Data System (ADS)

    Fu, Ziyuan

    Quantum graphs provide a setting to test the hypothesis that all ray-chaotic systems show universal wave chaotic properties. I study quantum graphs with a wave-chaotic approach. Here, an experimental setup consisting of a microwave coaxial cable network is used to simulate quantum graphs. Some basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed in that the statistics of diagonal and off-diagonal impedance elements are different. Waves trapped by multiple reflections on bonds between nodes in the graph most likely cause the deviations from universal behavior in the finite-size realization of a quantum graph. In addition, I present some investigations of the Random Coupling Model itself that are useful for further research.

  8. Active illuminated space object imaging and tracking simulation

    NASA Astrophysics Data System (ADS)

    Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu

    2016-10-01

    Optical earth-imaging simulation of a space target in orbit, and the target's extraction under laser illumination, are discussed. Based on the orbit and corresponding attitude of a satellite, a 3D rendering of its image was built. A general simulation platform was developed that is adaptive to various 3D satellite models and to the relative positions of the satellite and the earth-based detector system. A unified parallel-projection technique is proposed in this paper. Furthermore, we note that the random optical distribution under laser illumination is a challenge for object discrimination, the strong randomness of the active-illumination laser speckle being the primary factor. The combined effects of multi-frame accumulation and several tracking methods, such as Meanshift tracking, contour poid, and filter deconvolution, were simulated. Comparison of the results shows that the union of multi-frame accumulation and contour poid is recommended for laser actively illuminated images, offering high tracking precision and stability across multiple object attitudes.

  9. Probabilistic track coverage in cooperative sensor networks.

    PubMed

    Ferrari, Silvia; Zhang, Guoxian; Wettergren, Thomas A

    2010-12-01

    The quality of service of a network performing cooperative track detection is represented by the probability of obtaining multiple elementary detections over time along a target track. Recently, two different lines of research, namely, distributed-search theory and geometric transversals, have been used in the literature for deriving the probability of track detection as a function of random and deterministic sensors' positions, respectively. In this paper, we prove that these two approaches are equivalent under the same problem formulation. Also, we present a new performance function that is derived by extending the geometric-transversal approach to the case of random sensors' positions using Poisson flats. As a result, a unified approach for addressing track detection in both deterministic and probabilistic sensor networks is obtained. The new performance function is validated through numerical simulations and is shown to bring about considerable computational savings for both deterministic and probabilistic sensor networks.

  10. Time-resolved dynamics of granular matter by random laser emission

    NASA Astrophysics Data System (ADS)

    Folli, Viola; Ghofraniha, Neda; Puglisi, Andrea; Leuzzi, Luca; Conti, Claudio

    2013-07-01

    Because of the huge commercial importance of granular systems, the second-most used material in industry after water, intersecting multiple trades such as pharmacy and agriculture, fundamental research on grain-like materials has received an increasing amount of attention in recent decades. In photonics, the applications of granular materials have been only marginally investigated. We report the first phase diagram of a granular medium obtained from laser emission. The dynamics of a vertically oscillated granular medium in a liquid solution in a three-dimensional container is investigated by employing its random laser emission. The granular motion is a function of the frequency and amplitude of the mechanical solicitation; we show how the laser emission allows us to distinguish two phases in the granular medium and to analyze its spectral distribution. This constitutes a fundamental step in the field of granular media and gives clear evidence of the possible control of light-matter interaction achievable in grain-like systems.

  11. Estimation of treatment effect in a subpopulation: An empirical Bayes approach.

    PubMed

    Shen, Changyu; Li, Xiaochun; Jeong, Jaesik

    2016-01-01

    It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effect in subpopulations. In particular, due to limited sample size for subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals of the treatment effect that are often noninformative. We propose an empirical Bayes approach to combine both information embedded in a target subpopulation and information from other subjects to construct confidence intervals of the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
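
    A common form of this idea is Gaussian-Gaussian shrinkage of a subgroup estimate toward the pooled estimate. The sketch below is a generic empirical Bayes shrinkage estimator under assumed normality, not necessarily the authors' exact procedure:

```python
def eb_shrink(theta_sub, se_sub, theta_all, tau2):
    """Empirical Bayes posterior mean for a subgroup treatment effect.
    theta_sub, se_sub: subgroup estimate and its standard error;
    theta_all: overall (pooled) estimate;
    tau2: estimated between-subgroup variance.
    Shrinkage factor B in [0, 1]: B -> 1 (pull fully toward the pooled
    estimate) as the subgroup estimate gets noisier."""
    B = se_sub ** 2 / (se_sub ** 2 + tau2)
    return B * theta_all + (1.0 - B) * theta_sub

# A noisy small subgroup is shrunk strongly; a precise one barely moves.
small = eb_shrink(theta_sub=2.0, se_sub=1.0, theta_all=0.5, tau2=0.25)
large = eb_shrink(theta_sub=2.0, se_sub=0.1, theta_all=0.5, tau2=0.25)
```

    This is exactly the borrowing of information the abstract describes: the wide, noninformative subgroup interval is traded for a stabler estimate anchored by the rest of the trial.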

  12. Trial of Minocycline in a Clinically Isolated Syndrome of Multiple Sclerosis.

    PubMed

    Metz, Luanne M; Li, David K B; Traboulsee, Anthony L; Duquette, Pierre; Eliasziw, Misha; Cerchiaro, Graziela; Greenfield, Jamie; Riddehough, Andrew; Yeung, Michael; Kremenchutzky, Marcelo; Vorobeychik, Galina; Freedman, Mark S; Bhan, Virender; Blevins, Gregg; Marriott, James J; Grand'Maison, Francois; Lee, Liesly; Thibault, Manon; Hill, Michael D; Yong, V Wee

    2017-06-01

    On the basis of encouraging preliminary results, we conducted a randomized, controlled trial to determine whether minocycline reduces the risk of conversion from a first demyelinating event (also known as a clinically isolated syndrome) to multiple sclerosis. During the period from January 2009 through July 2013, we randomly assigned participants who had had their first demyelinating symptoms within the previous 180 days to receive either 100 mg of minocycline, administered orally twice daily, or placebo. Administration of minocycline or placebo was continued until a diagnosis of multiple sclerosis was established or until 24 months after randomization, whichever came first. The primary outcome was conversion to multiple sclerosis (diagnosed on the basis of the 2005 McDonald criteria) within 6 months after randomization. Secondary outcomes included conversion to multiple sclerosis within 24 months after randomization and changes on magnetic resonance imaging (MRI) at 6 months and 24 months (change in lesion volume on T2-weighted MRI, cumulative number of new lesions enhanced on T1-weighted MRI ["enhancing lesions"], and cumulative combined number of unique lesions [new enhancing lesions on T1-weighted MRI plus new and newly enlarged lesions on T2-weighted MRI]). A total of 142 eligible participants underwent randomization at 12 Canadian multiple sclerosis clinics; 72 participants were assigned to the minocycline group and 70 to the placebo group. The mean age of the participants was 35.8 years, and 68.3% were women. The unadjusted risk of conversion to multiple sclerosis within 6 months after randomization was 61.0% in the placebo group and 33.4% in the minocycline group, a difference of 27.6 percentage points (95% confidence interval [CI], 11.4 to 43.9; P=0.001). After adjustment for the number of enhancing lesions at baseline, the difference in the risk of conversion to multiple sclerosis within 6 months after randomization was 18.5 percentage points (95% CI, 3.7 to 33.3; P=0.01); the unadjusted risk difference was not significant at the 24-month secondary outcome time point (P=0.06). All secondary MRI outcomes favored minocycline over placebo at 6 months but not at 24 months. Trial withdrawals and adverse events of rash, dizziness, and dental discoloration were more frequent among participants who received minocycline than among those who received placebo. The risk of conversion from a clinically isolated syndrome to multiple sclerosis was significantly lower with minocycline than with placebo over 6 months but not over 24 months. (Funded by the Multiple Sclerosis Society of Canada; ClinicalTrials.gov number, NCT00666887.)

  13. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem's I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We discuss our experience with SSDs in the PROOF environment. We compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular, we discuss the scaling of PROOF system performance with the number of simultaneously running analysis jobs.

  14. Empirical Reference Distributions for Networks of Different Size

    PubMed Central

    Smith, Anna; Calder, Catherine A.; Browning, Christopher R.

    2016-01-01

    Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
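The core idea of normalizing a network statistic against a simple Bernoulli reference distribution can be sketched as follows. This is a stand-in for the paper's richer mixture-of-random-graphs construction: the triangle-count statistic, graph sizes, and densities below are all assumptions for illustration.

```python
# Sketch (assumed details): normalize a triangle count against a Bernoulli
# (Erdos-Renyi) reference distribution matched to the observed edge density.
import numpy as np

rng = np.random.default_rng(0)

def triangle_count(adj):
    # trace(A^3) counts each triangle 6 times in an undirected graph
    return int(np.trace(adj @ adj @ adj)) // 6

def bernoulli_reference_z(adj, n_sims=200):
    """z-score of the observed statistic against density-matched random graphs."""
    n = adj.shape[0]
    density = adj[np.triu_indices(n, 1)].mean()
    obs = triangle_count(adj)
    sims = []
    for _ in range(n_sims):
        upper = np.triu(rng.random((n, n)) < density, 1)
        a = (upper | upper.T).astype(int)
        sims.append(triangle_count(a))
    sims = np.array(sims, dtype=float)
    return (obs - sims.mean()) / (sims.std() + 1e-12)

# A graph with a planted clique should sit above its density-matched reference.
n = 30
adj = np.triu((rng.random((n, n)) < 0.1).astype(int), 1)
adj = adj + adj.T
adj[:6, :6] = 1 - np.eye(6, dtype=int)   # plant a dense clique
print(bernoulli_reference_z(adj))
```

The adjusted (z-scored) statistic, rather than the raw triangle count, is what one would then compare across networks of different sizes.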

  15. Detecting anomalies in CMB maps: a new method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neelakanta, Jayanth T., E-mail: jayanthtn@gmail.com

    2015-10-01

Ever since WMAP announced its first results, different analyses have shown that there is weak evidence for several large-scale anomalies in the CMB data. While the evidence for each anomaly appears to be weak, the fact that there are multiple seemingly unrelated anomalies makes it difficult to account for them via a single statistical fluke. So, one is led to considering a combination of these anomalies. But, if we ''hand-pick'' the anomalies (test statistics) to consider, we are making an a posteriori choice. In this article, we propose two statistics that do not suffer from this problem. The statistics are linear and quadratic combinations of the a_{ℓm}'s with random coefficients, and they test the null hypothesis that the a_{ℓm}'s are independent, normally distributed, zero-mean random variables with an m-independent variance. The motivation for considering multiple modes is this: because most physical models that lead to large-scale anomalies result in coupling multiple ℓ and m modes, the ''coherence'' of this coupling should get enhanced if a combination of different modes is considered. In this sense, the statistics are much more generic than those that have been hitherto considered in the literature. Using fiducial data, we demonstrate that the method works and discuss how it can be used with actual CMB data to make quite general statements about the incompatibility of the data with the null hypothesis.
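A minimal sketch of the null-hypothesis setup: if the a_{ℓm} are independent zero-mean Gaussians with an m-independent variance, a linear combination with fixed random coefficients is exactly Gaussian once scaled, so its empirical variance over many realizations should be 1. The mode count, variance, and coefficient distribution below are invented for illustration.

```python
# Sketch under stated assumptions: random-coefficient linear statistic of
# Gaussian modes, which is N(0, 1) under the null hypothesis after scaling.
import numpy as np

rng = np.random.default_rng(1)

n_modes = 50      # stand-in for the a_lm of a few low multipoles (assumed)
sigma = 2.0       # m-independent mode standard deviation (assumed)
coeffs = rng.normal(size=n_modes)   # random coefficients, drawn once

# 20000 realizations drawn under the null hypothesis.
alm = rng.normal(scale=sigma, size=(20000, n_modes))

# T = sum_lm c_lm a_lm, scaled to unit variance under the null.
samples = alm @ coeffs / np.sqrt(sigma**2 * (coeffs @ coeffs))
print(samples.mean(), samples.var())
```

A significant departure of the observed statistic from this unit-variance Gaussian would signal incompatibility with the null hypothesis.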

  16. Contextual Interference in Complex Bimanual Skill Learning Leads to Better Skill Persistence

    PubMed Central

    Pauwels, Lisa; Swinnen, Stephan P.; Beets, Iseult A. M.

    2014-01-01

The contextual interference (CI) effect is a robust phenomenon in the (motor) skill learning literature. However, CI has yielded mixed results in complex task learning. The current study addressed whether the CI effect generalizes to bimanual skill learning, with a focus on the temporal evolution of memory processes. In contrast to previous studies, an extensive training schedule, distributed across multiple days of practice, was provided. Participants practiced three frequency ratios across three practice days following either a blocked or random practice schedule. During the acquisition phase, better overall performance was observed for the blocked practice group, but this difference diminished as practice progressed. At immediate and delayed retention, the random practice group outperformed the blocked practice group, except for the most difficult frequency ratio. Our main finding is that the random practice group showed superior performance persistence over a one-week interval in all three frequency ratios compared to the blocked practice group. This study contributes to our understanding of learning, consolidation and memory of complex motor skills, which helps optimize training protocols in future studies and rehabilitation settings. PMID:24960171

  17. Modeling complex systems in the geosciences

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2013-03-01

    Many geophysical phenomena can be described as complex systems, involving phenomena such as extreme or "wild" events that often do not follow the Gaussian distribution that would be expected if the events were simply random and uncorrelated. For instance, some geophysical phenomena like earthquakes show a much higher occurrence of relatively large values than would a Gaussian distribution and so are examples of the "Noah effect" (named by Benoit Mandelbrot for the exceptionally heavy rain in the biblical flood). Other geophysical phenomena are examples of the "Joseph effect," in which a state is especially persistent, such as a spell of multiple consecutive hot days (heat waves) or several dry summers in a row. The Joseph effect was named after the biblical story in which Joseph's dream of seven fat cows and seven thin ones predicted 7 years of plenty followed by 7 years of drought.

  18. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
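The joint-distribution-as-product-of-potentials idea can be sketched as a log-linear score: each evidence source contributes a potential, and the passage score is the sum of weighted log-potentials. The potential functions, weights, and queries below are invented stand-ins, not the paper's trained model.

```python
# Minimal log-linear sketch: passage relevance as a product of potential
# functions over terms and concepts (invented potentials, not the paper's).
import math

def passage_score(passage_tokens, query_terms, query_concepts, weights):
    tokens = set(passage_tokens)
    # Each potential is a simple overlap fraction, clipped away from zero so
    # the log is defined; a real MRF would use richer feature functions.
    potentials = {
        "term": max(len(tokens & query_terms) / max(len(query_terms), 1), 1e-6),
        "concept": max(len(tokens & query_concepts) / max(len(query_concepts), 1), 1e-6),
    }
    # Joint score = product of weighted potentials, computed in log space.
    return sum(w * math.log(potentials[name]) for name, w in weights.items())

weights = {"term": 1.0, "concept": 2.0}   # assumed weights
q_terms = {"brca1", "mutation"}
q_concepts = {"breast_cancer"}
p1 = ["brca1", "mutation", "breast_cancer", "risk"]
p2 = ["gene", "expression", "assay"]
print(passage_score(p1, q_terms, q_concepts, weights) >
      passage_score(p2, q_terms, q_concepts, weights))   # more evidence wins
```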

  19. Gravitational Effects on Closed-Cellular-Foam Microstructure

    NASA Technical Reports Server (NTRS)

    Noever, David A.; Cronise, Raymond J.; Wessling, Francis C.; McMannus, Samuel P.; Mathews, John; Patel, Darayas

    1996-01-01

    Polyurethane foam has been produced in low gravity for the first time. The cause and distribution of different void or pore sizes are elucidated from direct comparison of unit-gravity and low-gravity samples. Low gravity is found to increase the pore roundness by 17% and reduce the void size by 50%. The standard deviation for pores becomes narrower (a more homogeneous foam is produced) in low gravity. Both a Gaussian and a Weibull model fail to describe the statistical distribution of void areas, and hence the governing dynamics do not combine small voids in either a uniform or a dependent fashion to make larger voids. Instead, the void areas follow an exponential law, which effectively randomizes the production of void sizes in a nondependent fashion consistent more with single nucleation than with multiple or combining events.

  20. A statistical analysis of product prices in online markets

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Watanabe, T.

    2010-08-01

We empirically investigate fluctuations in product prices in online markets using tick-by-tick price data collected from a Japanese price comparison site, and find some similarities and differences between product and asset prices. The average price of a product across e-retailers behaves almost like a random walk, although the probability of a price increase/decrease is higher conditional on multiple preceding events of price increase/decrease. This is quite similar to the property reported by previous studies of asset prices. However, we fail to find a long memory property in the volatility of product price changes. Also, we find that the price change distribution for product prices is close to an exponential distribution, rather than a power law distribution. These two findings are in sharp contrast with previous results regarding asset prices. We propose an interpretation that these differences may stem from the absence of speculative activities in product markets; namely, e-retailers seldom repeatedly buy and sell a product, unlike traders in asset markets.

  1. Mapping the distribution of the main host for plague in a complex landscape in Kazakhstan: An object-based approach using SPOT-5 XS, Landsat 7 ETM+, SRTM and multiple Random Forests

    NASA Astrophysics Data System (ADS)

    Wilschut, L. I.; Addink, E. A.; Heesterbeek, J. A. P.; Dubyanskiy, V. M.; Davis, S. A.; Laudisoit, A.; Begon, M.; Burdelov, L. A.; Atshabar, B. B.; de Jong, S. M.

    2013-08-01

Plague is a zoonotic infectious disease present in great gerbil populations in Kazakhstan. Infectious disease dynamics are influenced by the spatial distribution of the carriers (hosts) of the disease. The great gerbil, the main host in our study area, lives in burrows, which can be recognized on high resolution satellite imagery. In this study, using earth observation data at various spatial scales, we map the spatial distribution of burrows in a semi-desert landscape. The study area consists of various landscape types. To evaluate whether identification of burrows by classification is possible in these landscape types, the study area was subdivided into eight landscape units, on the basis of Landsat 7 ETM+ derived Tasselled Cap Greenness and Brightness, and SRTM-derived standard deviation in elevation. In the field, 904 burrows were mapped. Using two segmented 2.5 m resolution SPOT-5 XS satellite scenes, reference object sets were created. Random Forests were built for both SPOT scenes and used to classify the images. Additionally, a stratified classification was carried out, by building separate Random Forests per landscape unit. Burrows were successfully classified in all landscape units. In the ‘steppe on floodplain’ areas, classification worked best: producer's and user's accuracy in those areas reached 88% and 100%, respectively. In the ‘floodplain’ areas with a more heterogeneous vegetation cover, classification worked least well; there, accuracies were 86% and 58%, respectively. Stratified classification improved the results in all landscape units where comparison was possible (four), increasing kappa coefficients by 13, 10, 9 and 1%, respectively. In this study, an innovative stratification method using high- and medium-resolution imagery was applied in order to map host distribution on a large spatial scale. The burrow maps we developed will help to detect changes in the distribution of great gerbil populations and, moreover, serve as a unique empirical data set which can be used as input for epidemiological plague models. This is an important step in understanding the dynamics of plague.
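The stratified-classification idea (one model per landscape unit rather than one global model) can be illustrated with a deliberately simple classifier. The nearest-centroid model and synthetic data below are stand-ins for the paper's Random Forests and satellite-derived features; the data are constructed so the feature/class relationship differs between strata, which is exactly when stratification pays off.

```python
# Sketch: per-stratum classifiers vs one global classifier, using a toy
# nearest-centroid model on synthetic data (assumptions, not the paper's setup).
import numpy as np

rng = np.random.default_rng(2)

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Two strata in which the feature/class relationship is reversed, so a
# global model is confounded but per-stratum models are not.
n = 400
stratum = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
sign = np.where(stratum == 0, 1.0, -1.0)
X = (sign * (2 * y - 1))[:, None] + rng.normal(scale=0.5, size=(n, 1))

global_acc = (predict(fit_centroids(X, y), X) == y).mean()

pred = np.empty(n, dtype=int)
for s in (0, 1):
    m = stratum == s
    pred[m] = predict(fit_centroids(X[m], y[m]), X[m])
stratified_acc = (pred == y).mean()
print(global_acc, stratified_acc)
```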

  2. Probability distribution of the entanglement across a cut at an infinite-randomness fixed point

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Majumdar, Satya N.; Huse, David A.

    2017-03-01

We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ~ L^(-ψ(k)), where k ≡ S/ln[L/L₀], the large deviation function ψ(k) is found explicitly, and L₀ is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model which we calculate for large L via a mapping to Majorana fermions.

  3. A scaling law for random walks on networks

    PubMed Central

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-01-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics. PMID:25311870

  4. Partial transpose of random quantum states: Exact formulas and meanders

    NASA Astrophysics Data System (ADS)

    Fukuda, Motohisa; Śniady, Piotr

    2013-04-01

    We investigate the asymptotic behavior of the empirical eigenvalues distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show large deviation bound for the latter.

  5. A scaling law for random walks on networks

    NASA Astrophysics Data System (ADS)

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-10-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.

  6. A scaling law for random walks on networks.

    PubMed

    Perkins, Theodore J; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-10-14

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.

  7. Parametric and non-parametric masking of randomness in sequence alignments can be improved and leads to better resolved trees.

    PubMed

    Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard

    2010-03-31

Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstructions, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, the choice of which is arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood based models of sequence evolution, which opens the possibility of further improvements.

  8. Explicit equilibria in a kinetic model of gambling

    NASA Astrophysics Data System (ADS)

    Bassetti, F.; Toscani, G.

    2010-06-01

We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is up for gambling and randomly shared between the agents. For this equation the analytical form of the steady state is found for various realizations of the random fraction of the sum that is shared between the agents. Among others, the exponential distribution appears as the steady state in the case of a uniformly distributed random fraction, while a Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
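The uniform-fraction case is easy to simulate: pairs of agents pool their wealth and split it by a uniform random fraction. Since the claimed steady state is exponential, the sample standard deviation should approach the (exactly conserved) mean. This is a simulation sketch, not the paper's kinetic-equation analysis.

```python
# Simulation sketch of the pure gambling rule with a uniform random fraction.
import numpy as np

rng = np.random.default_rng(3)

def gamble(w, rounds):
    """Repeatedly pair agents at random and split each pair's pooled wealth."""
    w = w.copy()
    n = len(w)
    for _ in range(rounds):
        idx = rng.permutation(n)
        i, j = idx[: n // 2], idx[n // 2 :]
        total = w[i] + w[j]
        u = rng.random(n // 2)
        w[i], w[j] = u * total, (1 - u) * total
    return w

w = gamble(np.ones(20000), rounds=50)
print(w.mean(), w.std())   # for an exponential distribution, std == mean
```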

  9. Narrow-band generation in random distributed feedback fiber laser.

    PubMed

    Sugavanam, Srikanth; Tarasov, Nikita; Shu, Xuewen; Churkin, Dmitry V

    2013-07-15

Narrow-band emission with a spectral width down to ~0.05 nm is achieved in a random distributed feedback fiber laser employing a narrow-band fiber Bragg grating or fiber Fabry-Perot interferometer filter. The observed line-width is ~10 times narrower than that of other random distributed feedback fiber lasers demonstrated to date. The random DFB laser with the Fabry-Perot interferometer filter provides simultaneous multi-wavelength and narrow-band (within each line) generation, with the possibility of further wavelength tuning.

  10. Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Chen; Sichitiu, Mihail L.

Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of contact time distribution for random walk (RW) models assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, and thus reaches two different conclusions. In this paper we conduct comprehensive research on this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases imply a power-law or exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.

  11. Structural diversity effects of multilayer networks on the threshold of interacting epidemics

    NASA Astrophysics Data System (ADS)

    Wang, Weihong; Chen, MingMing; Min, Yong; Jin, Xiaogang

    2016-02-01

Foodborne diseases often spread through multiple vectors (e.g. fresh vegetables and fruits), showing that a multilayer network can spread a fatal pathogen through complex interactions. In this paper, first, we use a "top-down" analysis framework that depends on only two distributions to describe a random multilayer network with any number of layers. These two distributions are the overlaid degree distribution and the edge-type distribution of the multilayer network. Second, based on the two distributions, we adopt three indicators of multilayer network diversity to measure the correlation between network layers: network richness, likeness, and evenness. The network richness is the number of layers forming the multilayer network. The network likeness is the degree to which different layers share the same edge. The network evenness is the variance of the number of edges across layers. Third, based on a simple epidemic model, we analyze the influence of network diversity on the threshold of interacting epidemics with the coexistence of collaboration and competition. Our work extends the "top-down" analysis framework to deal with more complex epidemic situations and more diversity indicators, and quantifies the trade-off between thresholds of inter-layer collaboration and intra-layer transmission.

  12. Generic emergence of power law distributions and Lévy-Stable intermittent fluctuations in discrete logistic systems

    NASA Astrophysics Data System (ADS)

    Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin

    1998-08-01

The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form w_i(t+1) = λ(t) w_i(t) + a w̄(t) − b w_i(t) w̄(t) is studied by computer simulations. The variables w_i, i = 1,...,N, are the individual system components and w̄(t) = (1/N) Σ_i w_i(t) is their average. The parameters a and b are constants, while λ(t) is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution P(w,t) of the system components w_i turns out to fulfill a Pareto power law P(w,t) ~ w^(−1−α). The time evolution of w̄(t) presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index α, showing an intricate relation between the distribution of the w_i's at a given time and the temporal fluctuations of their average.
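A direct simulation sketch of the update rule follows. The parameter values, the uniform distribution for λ, and drawing λ independently per component are illustrative assumptions, not taken from the paper; with E[λ] = 1 the mean-field fixed point of the average is w̄* = a/b = 1.

```python
# Simulation sketch of the discrete logistic system
#   w_i(t+1) = lambda(t) w_i(t) + a wbar(t) - b w_i(t) wbar(t)
# with illustrative (assumed) parameter values.
import numpy as np

rng = np.random.default_rng(4)

N, a, b = 10000, 0.01, 0.01
w = np.ones(N)
for _ in range(1000):
    lam = rng.uniform(0.9, 1.1, size=N)   # random multiplicative factor
    wbar = w.mean()
    w = lam * w + a * wbar - b * w * wbar

# The coupling terms keep all components positive while the multiplicative
# noise generates a broad (power-law) distribution of w_i.
print(w.min(), w.mean(), w.max() / np.median(w))
```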

  13. The choice of prior distribution for a covariance matrix in multivariate meta-analysis: a simulation study.

    PubMed

    Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L

    2015-12-30

Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.

  14. IS THE SUICIDE RATE A RANDOM WALK?

    PubMed

    Yang, Bijou; Lester, David; Lyke, Jennifer; Olsen, Robert

    2015-06-01

    The yearly suicide rates for the period 1933-2010 and the daily suicide numbers for 1990 and 1991 were examined for whether the distribution of difference scores (from year to year and from day to day) fitted a normal distribution, a characteristic of stochastic processes that follow a random walk. If the suicide rate were a random walk, then any disturbance to the suicide rate would have a permanent effect and national suicide prevention efforts would likely fail. The distribution of difference scores from day to day (but not the difference scores from year to year) fitted a normal distribution and, therefore, were consistent with a random walk.
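The underlying check can be sketched directly: for a random walk with i.i.d. Gaussian steps, the first differences of the series should look normal. Sample skewness and excess kurtosis (both near 0 for a normal sample) stand in here for a formal normality test, and the series is synthetic, not the paper's suicide data.

```python
# Sketch: difference scores of a Gaussian random walk should look normal.
import numpy as np

rng = np.random.default_rng(5)

def skew_kurtosis(x):
    """Sample skewness and excess kurtosis (both ~0 for normal data)."""
    z = (x - x.mean()) / x.std()
    return (z**3).mean(), (z**4).mean() - 3.0

walk = np.cumsum(rng.normal(size=5000))   # synthetic "daily" series
skew, exkurt = skew_kurtosis(np.diff(walk))
print(skew, exkurt)
```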

  15. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  16. Multiple imputation in the presence of non-normal data.

    PubMed

    Lee, Katherine J; Carlin, John B

    2017-02-20

Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
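The appeal of PMM for non-normal data is that it copies observed values from donors rather than predicting on a normal scale. Below is a simplified, deterministic sketch of the matching step (a proper type 1 implementation would also draw the regression coefficients to propagate uncertainty, and MI would repeat the whole process several times); the data are synthetic.

```python
# Sketch of predictive mean matching: regress the incomplete variable on a
# covariate, then impute each missing value with the observed value of the
# donor whose prediction is closest.
import numpy as np

rng = np.random.default_rng(6)

def pmm_impute(x, y, missing):
    """Impute y[missing] from observed donors via predictive mean matching."""
    obs = ~missing
    # Ordinary least squares of y on x among complete cases.
    X = np.column_stack([np.ones(obs.sum()), x[obs]])
    beta, *_ = np.linalg.lstsq(X, y[obs], rcond=None)
    pred_all = beta[0] + beta[1] * x
    y_imp = y.copy()
    donors_pred, donors_y = pred_all[obs], y[obs]
    for i in np.where(missing)[0]:
        donor = np.argmin(np.abs(donors_pred - pred_all[i]))
        y_imp[i] = donors_y[donor]   # copy an observed value, not a prediction
    return y_imp

# Skewed (non-normal) y, linearly related to x; 50% missing at random.
n = 500
x = rng.normal(size=n)
y = np.exp(0.5 * x + rng.normal(scale=0.3, size=n))   # log-normal
missing = rng.random(n) < 0.5
y_obs = y.copy(); y_obs[missing] = np.nan
y_imp = pmm_impute(x, y_obs, missing)
print(np.isnan(y_imp).sum())
```

Because every imputed value is copied from a donor, the imputations automatically respect the skewed support of the observed data.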

  17. Smoking Cessation and Reduction in Schizophrenia (SCARIS) with e-cigarette: study protocol for a randomized control trial

    PubMed Central

    2014-01-01

    Background It is well established in studies across several countries that tobacco smoking is more prevalent among schizophrenic patients than the general population. Electronic cigarettes are becoming increasingly popular with smokers worldwide. To date there are no large randomized trials of electronic cigarettes in schizophrenic smokers. A well-designed trial is needed to compare efficacy and safety of these products in this special population. Methods/Design Intervention: We have designed a randomized controlled trial investigating the efficacy and safety of electronic cigarette. The trial will take the form of a prospective 12-month randomized clinical study to evaluate smoking reduction, smoking abstinence and adverse events in schizophrenic smokers not intending to quit. We will also monitor quality of life, neurocognitive functioning and measure participants’ perception and satisfaction of the product. Outcome measures: A ≥50% reduction in the number of cigarettes/day from baseline, will be calculated at each study visit (“reducers”). Abstinence from smoking will be calculated at each study visit (“quitters”). Smokers who leave the study protocol before its completion and will carry out the Early Termination Visit or who will not satisfy the criteria of “reducers” and “quitters” will be defined “non responders”. Statistical analysis: The differences of continuous variables between the three groups will be evaluated with the Kruskal-Wallis Test, followed by the Dunn multiple comparison test. The differences between the three groups for normally distributed data will be evaluated with ANOVA test one way, followed by the Newman-Keuls multiple comparison test. The normality of the distribution will be evaluated with the Kolmogorov-Smirnov test. Any correlations between the variables under evaluation will be assessed by Spearman r correlation. To compare qualitative data will be used the Chi-square test. 
Discussion: The main strengths of the SCARIS study are the following: it is the first large RCT in schizophrenic patients, involving both inpatients and outpatients, evaluating the effect of the intervention with a three-arm study design and long-term follow-up (52 weeks). The goal is to propose an effective intervention to reduce the risks of tobacco smoking, as a complementary tool to treat tobacco addiction in schizophrenia. Trial registration: ClinicalTrials.gov, NCT01979796. PMID:24655473
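The statistical plan described in the protocol maps directly onto standard SciPy calls. A minimal sketch with simulated placeholder data (three hypothetical arms, not trial results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Three hypothetical study arms (simulated placeholders, not trial data).
arm_a, arm_b, arm_c = (rng.normal(loc, 1.0, size=40) for loc in (0.0, 0.2, 0.5))

# Normality of the distribution: Kolmogorov-Smirnov against a fitted normal.
ks_stat, ks_p = stats.kstest(arm_a, "norm", args=(arm_a.mean(), arm_a.std()))

# Non-normal continuous variables: Kruskal-Wallis across the three groups.
h_stat, kw_p = stats.kruskal(arm_a, arm_b, arm_c)

# Normally distributed data: one-way ANOVA.
f_stat, anova_p = stats.f_oneway(arm_a, arm_b, arm_c)

# Correlations between two variables: Spearman r.
rho, sp_p = stats.spearmanr(arm_a, arm_b)
```

The Dunn and Newman-Keuls post hoc comparisons are not part of SciPy itself; separate packages (e.g., scikit-posthocs for the Dunn test) would be needed for those steps.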

  18. Effect of texture randomization on the slip and interfacial robustness in turbulent flows over superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Seo, Jongmin; Mani, Ali

    2018-04-01

Superhydrophobic surfaces demonstrate promising potential for skin friction reduction in naval and hydrodynamic applications. Recent developments of superhydrophobic surfaces aiming for scalable applications use random distributions of roughness, such as spray coating and etching processes. However, most previous analyses of the interaction between flows and superhydrophobic surfaces studied periodic geometries that are economically feasible only in laboratory-scale experiments. In order to assess the drag reduction effectiveness as well as the interfacial robustness of superhydrophobic surfaces with randomly distributed textures, we conduct direct numerical simulations of turbulent flows over randomly patterned interfaces considering a range of texture widths w+ ≈ 4-26 and solid fractions ϕs = 11%-25%. Slip and no-slip boundary conditions are implemented in a pattern, modeling the presence of gas-liquid interfaces and solid elements. Our results indicate that the slip of randomly distributed textures under turbulent flows is about 30% less than that of surfaces with aligned features of the same size. In the small-texture-size limit w+ ≈ 4, the slip length of the randomly distributed textures in turbulent flows is well described by a previously introduced Stokes flow solution for randomly distributed shear-free holes. By comparing DNS results for patterned slip and no-slip boundaries against the corresponding homogenized slip length boundary conditions, we show that turbulent flows over randomly distributed posts can be represented by an isotropic slip length in the streamwise and spanwise directions. The average pressure fluctuation on a gas pocket is similar to that of aligned features with the same texture size and gas fraction, but the maximum interface deformation at the leading edge of the roughness element is about twice as large when the textures are randomly distributed. 
The presented analyses provide insights on implications of texture randomness on drag reduction performance and robustness of superhydrophobic surfaces.

  19. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  20. A Validation Study of the Rank-Preserving Structural Failure Time Model: Confidence Intervals and Unique, Multiple, and Erroneous Solutions.

    PubMed

    Ouwens, Mario; Hauch, Ole; Franzén, Stefan

    2018-05-01

The rank-preserving structural failure time model (RPSFTM) is used in health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival if only the reference treatment had been used) and assumes that, at randomization, the counterfactual survival distribution for the investigational and reference arms is identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. Our objective was to evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in the percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different, crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. A limitation is that distributions other than exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition to indicate acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared to the time direct starters stay on investigational treatment.
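The counterfactual construction behind the RPSFTM can be illustrated with a toy no-switching baseline, in which every patient in the investigational arm stays on treatment for their whole observed time, so the counterfactual time is simply U = exp(ψ)·T. The acceleration factor, sample size, and mean-matching criterion below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy no-switching baseline: the counterfactual time is U = exp(psi) * T.
true_psi = np.log(0.5)                    # treatment stretches survival by 2x
n = 5000
t_ref = rng.exponential(1.0, size=n)      # reference-arm survival times
t_inv = np.exp(-true_psi) * rng.exponential(1.0, size=n)  # investigational arm

def counterfactual_gap(psi):
    """Distance between the arms' counterfactual means at trial value psi.
    The RPSFTM estimate is the psi making the arms indistinguishable."""
    return abs((np.exp(psi) * t_inv).mean() - t_ref.mean())

grid = np.linspace(-2.0, 1.0, 301)
psi_hat = grid[np.argmin([counterfactual_gap(p) for p in grid])]
```

A full RPSFTM compares whole counterfactual survival distributions (e.g., via a log-rank statistic) rather than means, and applies exp(ψ) only to each patient's on-treatment time; the multiple-solution behavior reported above arises in that richer setting.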

  1. A Lego Mindstorms NXT based test bench for multiagent exploratory systems and distributed network partitioning

    NASA Astrophysics Data System (ADS)

    Patil, Riya Raghuvir

Networks of communicating agents require distributed algorithms for a variety of tasks in the field of network analysis and control. For applications such as swarms of autonomous vehicles, ad hoc and wireless sensor networks, and military and civilian tasks such as exploration and patrolling, a robust autonomous system that uses a distributed algorithm for self-partitioning can be significantly helpful. A single team of autonomous vehicles in a field may need to self-disassemble into multiple teams conducive to completing multiple control tasks. Moreover, because communicating agents are subject to changes, namely the addition or failure of an agent or link, a distributed or decentralized algorithm is preferable to relying on a central agent. A framework for studying the self-partitioning of such multi-agent systems with a basic mobility model not only saves time in conception but also provides a cost-effective prototype without compromising the physical realization of the proposed idea. In this thesis I present my work on the implementation of a flexible, distributed stochastic partitioning algorithm on the Lego(TM) Mindstorms NXT, using National Instruments' LabVIEW(TM) graphical programming platform to form a team of agents communicating via the NXT-Bee radio module. We single out mobility, communication and self-partitioning as the core elements of the work. The goal is to randomly explore a precinct for reference sites. Agents who have discovered the reference sites announce their target acquisition, forming a network based upon the distance of each agent from the others, wherein self-partitioning begins to find an optimal partition. Further, to illustrate the work, an experimental test bench of five Lego NXT robots is presented.

  2. Integrating multiple distribution models to guide conservation efforts of an endangered toad

    USGS Publications Warehouse

    Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.

    2015-01-01

Species distribution models are used for numerous purposes such as predicting changes in species’ ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in identifying sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species’ current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models.
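As a rough illustration of the modeling step, a Random Forests habitat model can be sketched with scikit-learn on synthetic predictors; the environmental variables, the presence rule, and the 0.5 suitability threshold below are placeholders, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "environmental" predictors for 1000 grid cells: a long-term
# climate variable and a recent remotely-sensed vegetation index.
X = rng.normal(size=(1000, 2))
# Hypothetical presence/absence: suitable when both predictors are favorable.
y = ((X[:, 0] > 0) & (X[:, 1] > -0.5)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Habitat suitability = predicted probability of presence per cell.
suitability = model.predict_proba(X)[:, 1]
potential_habitat = (suitability > 0.5).mean()  # fraction modeled as habitat
```

Running two such models, one with long-term variables only and one adding recent remotely-sensed layers, and intersecting the resulting suitability maps reproduces the potential-versus-current habitat comparison described above.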

  3. Vector nature of multi-soliton patterns in a passively mode-locked figure-eight fiber laser.

    PubMed

    Ning, Qiu-Yi; Liu, Hao; Zheng, Xu-Wu; Yu, Wei; Luo, Ai-Ping; Huang, Xu-Guang; Luo, Zhi-Chao; Xu, Wen-Cheng; Xu, Shan-Hui; Yang, Zhong-Min

    2014-05-19

    The vector nature of multi-soliton dynamic patterns was investigated in a passively mode-locked figure-eight fiber laser based on the nonlinear amplifying loop mirror (NALM). By properly adjusting the cavity parameters such as the pump power level and intra-cavity polarization controllers (PCs), in addition to the fundamental vector soliton, various vector multi-soliton regimes were observed, such as the random static distribution of vector multiple solitons, vector soliton cluster, vector soliton flow, and the state of vector multiple solitons occupying the whole cavity. Both the polarization-locked vector solitons (PLVSs) and the polarization-rotating vector solitons (PRVSs) were observed for fundamental soliton and each type of multi-soliton patterns. The obtained results further reveal the fundamental physics of multi-soliton patterns and demonstrate that the figure-eight fiber lasers are indeed a good platform for investigating the vector nature of different soliton types.

  4. Quantitative detection of multiple fluorophore sites as a tool for diagnosis and monitoring disease progression in salivary glands

    NASA Astrophysics Data System (ADS)

    Gannot, Israel; Bonner, Robert F.; Gannot, Gallya; Fox, Philip C.; You, Joon S.; Waynant, Ronald W.; Gandjbakhche, Amir H.

    1997-08-01

A series of fluorescent surface images were obtained from physical models of localized fluorophores embedded at various depths and separations in tissue phantoms. Our random walk theory was applied to create an analytical model of multiple fluorophores embedded in a tissue-like phantom. Using this model and the acquired set of surface images, the locations of the fluorophores were reconstructed and compared to their known 3-D distributions. A good correlation was found, and the ability to resolve fluorophores as a function of depth and separation was determined. In a parallel in-vitro study, specific coloring of sections of minor salivary glands was also demonstrated. These results demonstrate the possibility of using inverse methods to reconstruct unknown locations and concentrations of optical probes specifically bound to infiltrating lymphocytes in minor salivary glands of patients with Sjogren's syndrome.

  5. Benford’s Law: Textbook Exercises and Multiple-Choice Testbanks

    PubMed Central

    Slepkov, Aaron D.; Ironside, Kevin B.; DiBattista, David

    2015-01-01

    Benford’s Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford’s Law is in detecting number invention and tampering such as found in accounting-, tax-, and voter-fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford’s Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford’s Law, the testbank is nonetheless secure against such a Benford’s attack for banal reasons. PMID:25689468

  6. Inverse atmospheric radiative transfer problems - A nonlinear minimization search method of solution. [aerosol pollution monitoring

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1976-01-01

    The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. In the former case and for single scattering, the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems has been developed, and is applied here to the problem of monitoring aerosol pollution, namely the complex refractive index and size distribution of aerosol particles.

  7. Benford's Law: textbook exercises and multiple-choice testbanks.

    PubMed

    Slepkov, Aaron D; Ironside, Kevin B; DiBattista, David

    2015-01-01

    Benford's Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford's Law is in detecting number invention and tampering such as found in accounting-, tax-, and voter-fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford's Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford's Law, the testbank is nonetheless secure against such a Benford's attack for banal reasons.
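The frequencies quoted above follow directly from Benford's logarithmic form, P(d) = log10(1 + 1/d); a quick check:

```python
import math

# Expected leading-digit frequencies under Benford's Law: P(d) = log10(1 + 1/d).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# As the abstract notes, digit 1 is most common (~30.1%) and 9 least (~4.6%),
# and the nine probabilities sum to 1.
```

Comparing the observed leading-digit histogram of a testbank's numerical answers against these expected frequencies (e.g., with a chi-squared goodness-of-fit test) is the standard way to assess Benford conformity.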

  8. A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps

    NASA Astrophysics Data System (ADS)

    Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.

    2017-06-01

Chaotic maps possess high parameter sensitivity, random-like behavior and one-way computation, properties that favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hashes. The message is divided into four parts; each part is processed by a different 1D chaotic map unit, yielding an intermediate hash code. The four codes are concatenated into two blocks, and each block is processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as the distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple and efficient and holds comparable capabilities when compared with some recent chaos-based hash algorithms.
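A toy sketch of the first stage of this structure (split the message into four parts, drive a 1D chaotic map with each part, combine the orbits) might look as follows. This is an illustrative logistic-map example, not the published scheme, and it has no real cryptographic strength:

```python
def logistic_iterate(x, r=3.99, n=64):
    """Iterate the logistic map x -> r*x*(1-x), a common 1D chaotic map."""
    for _ in range(n):
        x = r * x * (1 - x)
    return x

def toy_chaotic_hash(message: bytes, digest_bits: int = 128) -> str:
    """Toy sketch only: split the message into four parts, seed a chaotic
    map from each part, and combine bits of the resulting orbits."""
    parts = [message[i::4] for i in range(4)]  # four interleaved parts
    digest = 0
    for part in parts:
        # Map the part's bytes into an initial condition in (0, 1).
        seed = (sum((b + 1) * (i + 1) for i, b in enumerate(part)) % 9973 + 1) / 9975
        x = logistic_iterate(seed)
        # Extract 32 bits from the chaotic orbit.
        digest = (digest << 32) | int(x * (2**32 - 1))
    return f"{digest % (1 << digest_bits):032x}"

h1 = toy_chaotic_hash(b"hello world")
```

The published scheme's second stage (2D chaotic map units over two concatenated blocks) and its security analyses are beyond this sketch.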

  9. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    PubMed

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
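The minimum p-value procedure can be sketched in a few lines; the two candidate statistics here (t-test and Wilcoxon rank-sum) and the simulated data are illustrative choices, not those of the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def min_p(treat, control):
    """Smallest p-value over two candidate test statistics."""
    p_t = stats.ttest_ind(treat, control).pvalue
    p_w = stats.mannwhitneyu(treat, control).pvalue
    return min(p_t, p_w)

treat = rng.normal(0.8, 1.0, size=30)
control = rng.normal(0.0, 1.0, size=30)
observed = min_p(treat, control)

# Permutation reference distribution: reshuffle the group labels.
pooled = np.concatenate([treat, control])
perm_stats = []
for _ in range(500):
    rng.shuffle(pooled)
    perm_stats.append(min_p(pooled[:30], pooled[30:]))

# Overall p-value: fraction of permutations with a min-p at least as extreme.
p_overall = np.mean(np.array(perm_stats) <= observed)
```

Because the reference distribution is built from the minimum over the same candidate statistics, the multiplicity of tests is accounted for automatically and the type I error rate is controlled.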

  10. q-Gaussian distributions and multiplicative stochastic processes for analysis of multiple financial time series

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2010-12-01

This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be theoretically derived as the stationary probability distribution of a stochastic differential equation with mutually independent multiplicative and additive noises. Using the proposed stochastic differential equation, a method to evaluate a default probability under a given risk buffer is proposed.
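A possible one-dimensional Euler-Maruyama sketch of such an SDE, with both a multiplicative and an additive noise term, is below; the parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# dx = -gamma*x dt + x*sqrt(2B) dW1 + sqrt(2A) dW2 : linear drift plus
# independent multiplicative and additive noises. Its stationary density
# is heavy-tailed (q-Gaussian-like) rather than Gaussian when B > 0.
gamma, A, B, dt, steps = 1.0, 0.5, 0.3, 1e-3, 100_000
x = 0.0
trajectory = np.empty(steps)
for t in range(steps):
    dW1, dW2 = rng.normal(scale=np.sqrt(dt), size=2)
    x += -gamma * x * dt + x * np.sqrt(2 * B) * dW1 + np.sqrt(2 * A) * dW2
    trajectory[t] = x

# Heavy tails show up as kurtosis above the Gaussian value of 3.
kurtosis = np.mean(trajectory**4) / np.mean(trajectory**2) ** 2
```

With B = 0 the additive noise alone would give an Ornstein-Uhlenbeck process with a Gaussian stationary law; the multiplicative term is what produces the power-law tails characteristic of q-Gaussians.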

  11. Multiple-labelling immunoEM using different sizes of colloidal gold: alternative approaches to test for differential distribution and colocalization in subcellular structures.

    PubMed

    Mayhew, Terry M; Lucocq, John M

    2011-03-01

    Various methods for quantifying cellular immunogold labelling on transmission electron microscope thin sections are currently available. All rely on sound random sampling principles and are applicable to single immunolabelling across compartments within a given cell type or between different experimental groups of cells. Although methods are also available to test for colocalization in double/triple immunogold labelling studies, so far, these have relied on making multiple measurements of gold particle densities in defined areas or of inter-particle nearest neighbour distances. Here, we present alternative two-step approaches to codistribution and colocalization assessment that merely require raw counts of gold particles in distinct cellular compartments. For assessing codistribution over aggregate compartments, initial statistical evaluation involves combining contingency table and chi-squared analyses to provide predicted gold particle distributions. The observed and predicted distributions allow testing of the appropriate null hypothesis, namely, that there is no difference in the distribution patterns of proteins labelled by different sizes of gold particle. In short, the null hypothesis is that of colocalization. The approach for assessing colabelling recognises that, on thin sections, a compartment is made up of a set of sectional images (profiles) of cognate structures. The approach involves identifying two groups of compartmental profiles that are unlabelled and labelled for one gold marker size. The proportions in each group that are also labelled for the second gold marker size are then compared. Statistical analysis now uses a 2 × 2 contingency table combined with the Fisher exact probability test. Having identified double labelling, the profiles can be analysed further in order to identify characteristic features that might account for the double labelling. 
In each case, the approach is illustrated using synthetic and/or experimental datasets and can be refined to correct observed labelling patterns to specific labelling patterns. These simple and efficient approaches should be of more immediate utility to those interested in codistribution and colocalization in multiple immunogold labelling investigations.
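The two statistical steps described above use only standard tests; a minimal sketch with invented gold-particle counts (a hypothetical 2 × 2 table of compartment profiles, not data from the paper):

```python
from scipy.stats import fisher_exact, chi2_contingency

# Second step sketched above: among compartment profiles labelled vs
# unlabelled for gold size A, compare the proportions also labelled for
# gold size B. The counts below are purely illustrative.
#                 B-labelled   B-unlabelled
table = [[18, 12],            # profiles labelled for A
         [5,  45]]            # profiles unlabelled for A

odds_ratio, p_fisher = fisher_exact(table)

# First-step analogue: chi-squared comparison of observed vs expected
# gold-particle distributions across compartments.
chi2, p_chi2, dof, expected = chi2_contingency(table)
```

A small Fisher p-value here would reject the null hypothesis that B-labelling is independent of A-labelling, i.e., it would indicate colabelling of the two markers on the same profiles.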

  12. Micromechanical analysis of composites with fibers distributed randomly over the transverse cross-section

    NASA Astrophysics Data System (ADS)

    Weng, Jingmeng; Wen, Weidong; Cui, Haitao; Chen, Bo

    2018-06-01

A new method to generate a random distribution of fibers in the transverse cross-section of fiber-reinforced composites with high fiber volume fraction is presented in this paper. Based on microscopy observation of the transverse cross-sections of unidirectional composite laminates, a hexagonal arrangement is set as the initial configuration, each fiber is given an arbitrary initial velocity in an arbitrary direction, and the micro-scale representative volume element (RVE) is established by simulating perfectly elastic collisions. Combined with the proposed periodic boundary conditions, which are suitable for multi-axial loading, the effective elastic properties of composite materials can be predicted. The predicted properties show reasonable agreement with experimental results. Comparing the stress fields of the RVE with randomly distributed fibers and the RVE with periodically distributed fibers shows that the predicted elastic modulus of the former is greater than that of the latter.

  13. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. 
Such an approach could be coupled to more detailed models of hillslope processes in future to derive integrated models of hillslope production and fluvial transport processes, which is particularly useful to identify sediment provenance in poorly monitored river basins.
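The inverse Monte Carlo idea (guess random source grain sizes, set supply from the minimum downstream transport capacity, keep only realizations matching the record) can be caricatured in a few lines. Everything below, including the capacity law and the "observed" flux, is an invented stand-in, not CASCADE:

```python
import numpy as np

rng = np.random.default_rng(2)

n_reaches = 5
reach_factor = rng.uniform(0.5, 2.0, size=n_reaches)  # hypothetical reach strengths
observed_flux = 1.2     # hypothetical sedimentary-record value at the outlet

def transport_capacity(grain_size, reach):
    # Hypothetical capacity law: finer grains move more easily; varies by reach.
    return reach_factor[reach] / grain_size

accepted = []
for _ in range(7500):                       # 7500 random initializations
    d = rng.uniform(0.1, 10.0)              # candidate source grain size (mm)
    capacities = [transport_capacity(d, r) for r in range(n_reaches)]
    supply = min(capacities)                # supply limited by the weakest reach
    if abs(supply - observed_flux) < 0.05:  # compare with the record
        accepted.append(d)

acceptance_rate = len(accepted) / 7500
```

The accepted realizations play the role of the ~1% of initializations that reproduced the sedimentary record; their grain sizes are the back-calculated source properties.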

  14. Characterizing ISI and sub-threshold membrane potential distributions: Ensemble of IF neurons with random squared-noise intensity.

    PubMed

    Kumar, Sanjeev; Karmeshu

    2018-04-01

A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing a superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of the generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed-form expressions of probability distributions in terms of the generalized K-distribution. Based on a record of the spiking activity of thousands of neurons, the findings of the proposed model are validated. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.
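The superstatistical picture (gamma-distributed σ² across neurons, Gaussian sub-threshold fluctuations within each neuron) can be sketched by direct simulation; the leaky-integrator parameters below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(6)

# For each "neuron", draw a squared-noise intensity sigma^2 from a gamma
# distribution, then simulate a leaky membrane with that noise level.
# Pooling across neurons mixes Gaussians with random variance, which the
# theory predicts gives heavier-than-Gaussian (K-distribution-like) tails.
tau, dt, steps = 20.0, 0.1, 1000
sigma2 = rng.gamma(shape=2.0, scale=0.5, size=200)   # per-neuron sigma^2

pooled = []
for s2 in sigma2:
    v = 0.0
    for _ in range(steps):
        v += (-v / tau) * dt + np.sqrt(2 * s2 * dt) * rng.normal()
    pooled.append(v)

pooled = np.array(pooled)
excess_kurtosis = np.mean(pooled**4) / np.mean(pooled**2) ** 2 - 3.0
```

A positive excess kurtosis in the pooled potentials is the signature of the variance mixture; the paper derives the corresponding closed-form generalized K-distribution analytically.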

  15. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
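The approximation in question, equating E[max(X1, X2)] with max(E[X1], E[X2]), is in general a strict underestimate for independent, nondegenerate variables. A quick simulation with two i.i.d. exponential module execution times makes the gap concrete:

```python
import random

random.seed(0)

# Two i.i.d. Exp(1) module execution times: E[X1] = E[X2] = 1, so
# max(E[X1], E[X2]) = 1, while E[max(X1, X2)] = 1 + 1/2 = 1.5 exactly.
n = 100_000
samples = [(random.expovariate(1.0), random.expovariate(1.0)) for _ in range(n)]
e_max = sum(max(a, b) for a, b in samples) / n   # estimates E[max(X1, X2)]
max_e = 1.0                                      # max(E[X1], E[X2])
```

The 50% gap illustrates why replacing the expected maximum by the maximum expectation can distort conclusions, consistent with the paper's finding that the approximation invalidates two of the earlier central results.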

  16. A distributed multichannel demand-adaptive P2P VoD system with optimized caching and neighbor-selection

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Chen, Minghua; Parekh, Abhay; Ramchandran, Kannan

    2011-09-01

We design a distributed multi-channel P2P Video-on-Demand (VoD) system using "plug-and-play" helpers. Helpers are heterogeneous "micro-servers" with limited storage, bandwidth and number of users they can serve simultaneously. Our proposed system has the following salient features: (1) it jointly optimizes over helper-user connection topology, video storage distribution and transmission bandwidth allocation; (2) it minimizes server load, and is adaptable to varying supply and demand patterns across multiple video channels irrespective of video popularity; and (3) it is fully distributed and requires little or no maintenance overhead. The combinatorial nature of the problem and the system demand for distributed algorithms makes the problem uniquely challenging. By utilizing Lagrangian decomposition and Markov chain approximation based arguments, we address this challenge by designing two distributed algorithms running in tandem: a primal-dual storage and bandwidth allocation algorithm and a "soft-worst-neighbor-choking" topology-building algorithm. Our scheme provably converges to a near-optimal solution, and is easy to implement in practice. Packet-level simulation results show that the proposed scheme achieves minimum server load under highly heterogeneous combinations of supply and demand patterns, and is robust to system dynamics of user/helper churn, user/helper asynchrony, and random delays in the network.

  17. Work distributions for random sudden quantum quenches

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter

    2017-05-01

The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
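For the two-level case with a sharply given initial Hamiltonian, the work distribution of a sudden quench can be sampled directly: draw final Hamiltonians from a GUE and weight each energy jump by the thermal occupation times the squared overlap. A minimal numerical sketch (the GUE normalization and the initial energies are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_gue(n):
    """Draw an n x n Hermitian matrix with Gaussian entries (GUE-type)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

# Sudden quench: work values w = E_final_m - E_initial_n, with Born
# probabilities p0[n] * |<m_final | n_initial>|^2.
E0 = np.array([-1.0, 1.0])               # sharply given initial energies
psi0 = np.eye(2)                          # initial energy eigenstates
beta = 1.0
p0 = np.exp(-beta * E0); p0 /= p0.sum()   # thermal occupation of initial states

works, weights = [], []
for _ in range(2000):
    H1 = sample_gue(2)
    E1, V = np.linalg.eigh(H1)
    overlaps = np.abs(V.conj().T @ psi0) ** 2   # |<m_final|n_initial>|^2
    for n in range(2):
        for m in range(2):
            works.append(E1[m] - E0[n])
            weights.append(p0[n] * overlaps[m, n])

mean_work = np.average(works, weights=weights)
```

A weighted histogram of `works` estimates the work pdf; because the weights for each sampled Hamiltonian sum to one, the pdf is automatically normalized.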

  18. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    PubMed Central

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present-day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  19. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach.

    PubMed

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447-2001). The degree of overlap between "reference founding core" distributions and the distributions obtained from sampling the present-day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8-30% at the cost of a larger variance. Criteria aimed at maximizing locally spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics.

  20. Opinion formation and distribution in a bounded-confidence model on various networks

    NASA Astrophysics Data System (ADS)

    Meng, X. Flora; Van Gorder, Robert A.; Porter, Mason A.

    2018-02-01

    In the social, behavioral, and economic sciences, it is important to predict which individual opinions eventually dominate in a large population, whether there will be a consensus, and how long it takes for a consensus to form. Such ideas have been studied heavily both in physics and in other disciplines, and the answers depend strongly both on how one models opinions and on the network structure on which opinions evolve. One model that was created to study consensus formation quantitatively is the Deffuant model, in which the opinion distribution of a population evolves via sequential random pairwise encounters. To consider heterogeneity of interactions in a population along with social influence, we study the Deffuant model on various network structures (deterministic synthetic networks, random synthetic networks, and social networks constructed from Facebook data). We numerically simulate the Deffuant model and conduct regression analyses to investigate the dependence of the time to reach steady states on various model parameters, including a confidence bound for opinion updates, the number of participating entities, and their willingness to compromise. We find that network structure and parameter values both have important effects on the convergence time and the number of steady-state opinion groups. For some network architectures, we observe that the relationship between the convergence time and model parameters undergoes a transition at a critical value of the confidence bound. For some networks, the steady-state opinion distribution also changes from consensus to multiple opinion groups at this critical value.
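The pairwise update rule of the Deffuant model is simple enough to sketch directly. In the following toy simulation (network choice and parameter values are illustrative, not the paper's experimental setup), randomly chosen neighbors compromise only when their opinions differ by less than the confidence bound:

```python
import numpy as np

def deffuant(adj, epsilon=0.2, mu=0.5, steps=100000, seed=0):
    """Deffuant bounded-confidence model: sequential random pairwise updates.

    adj: list of neighbor lists (the interaction network).
    epsilon: confidence bound; mu: willingness to compromise.
    """
    rng = np.random.default_rng(seed)
    x = rng.random(len(adj))              # initial opinions in [0, 1]
    for _ in range(steps):
        i = rng.integers(len(adj))
        if not adj[i]:
            continue
        j = adj[i][rng.integers(len(adj[i]))]
        if abs(x[i] - x[j]) < epsilon:    # interact only within the bound
            d = mu * (x[j] - x[i])
            x[i], x[j] = x[i] + d, x[j] - d
    return x

# complete graph on 50 nodes as a toy example
n = 50
complete = [[j for j in range(n) if j != i] for i in range(n)]
final = deffuant(complete, epsilon=0.6, mu=0.5)
print(np.ptp(final))
```

With a wide confidence bound the opinions collapse toward consensus; shrinking `epsilon` below a critical value instead yields multiple steady-state opinion groups, which is the transition the abstract describes.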

  1. Multiple Two-Way Time Message Exchange (TTME) Time Synchronization for Bridge Monitoring Wireless Sensor Networks

    PubMed Central

    Shi, Fanrong; Tuo, Xianguo; Yang, Simon X.; Li, Huailiang; Shi, Rui

    2017-01-01

    Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors, such as temperature, vibration and strain sensors. Since multiple sensors are distributed over the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on the shape of the bridge, a spanning tree is employed in this paper to build linear-topology WSNs and achieve time synchronization. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation. Multiple TTMEs are proposed to obtain a subset of TTME observations. A timeout restriction and retry mechanism are employed to avoid the estimation errors caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm avoids the estimation errors caused by clock drift and minimizes the estimation error due to large random delay jitter. The proposed algorithm is an accurate, low-complexity time synchronization algorithm for bridge health monitoring. PMID:28471418

  2. Multiple Two-Way Time Message Exchange (TTME) Time Synchronization for Bridge Monitoring Wireless Sensor Networks.

    PubMed

    Shi, Fanrong; Tuo, Xianguo; Yang, Simon X; Li, Huailiang; Shi, Rui

    2017-05-04

    Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors, such as temperature, vibration and strain sensors. Since multiple sensors are distributed over the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on the shape of the bridge, a spanning tree is employed in this paper to build linear-topology WSNs and achieve time synchronization. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation. Multiple TTMEs are proposed to obtain a subset of TTME observations. A timeout restriction and retry mechanism are employed to avoid the estimation errors caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm avoids the estimation errors caused by clock drift and minimizes the estimation error due to large random delay jitter. The proposed algorithm is an accurate, low-complexity time synchronization algorithm for bridge health monitoring.
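The clock-offset estimator behind a two-way time message exchange can be illustrated with synthetic timestamps. Assuming symmetric mean link delays and Gaussian jitter, so that the MLE over multiple exchanges reduces to the sample mean of the per-exchange estimates (timestamp values and delay parameters below are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def ttme_offset_estimate(true_offset, delay=2.0, jitter=0.5, n_exchanges=20):
    """Estimate a child's clock offset from multiple two-way exchanges.

    Per exchange: the child stamps T1, the parent receives at T2 and
    replies at T3, and the child receives at T4. With symmetric mean
    delay, each exchange yields theta_k = ((T2 - T1) + (T3 - T4)) / 2;
    under Gaussian jitter the MLE over n exchanges is their mean.
    """
    est = np.empty(n_exchanges)
    for k in range(n_exchanges):
        d_up = delay + jitter * rng.standard_normal()    # uplink delay
        d_dn = delay + jitter * rng.standard_normal()    # downlink delay
        t1 = 100.0 + 10.0 * k                            # child clock
        t2 = t1 + true_offset + d_up                     # parent clock
        t3 = t2 + 0.5                                    # processing time
        t4 = t3 - true_offset + d_dn                     # back on child clock
        est[k] = ((t2 - t1) + (t3 - t4)) / 2
    return est.mean()

theta = ttme_offset_estimate(true_offset=3.7)
print(round(theta, 2))  # close to 3.7
```

Each exchange contributes noise (d_up - d_dn)/2, so averaging n exchanges shrinks the estimation error roughly as 1/sqrt(n), which is the benefit of collecting multiple TTME observations.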

  3. A spatial analysis of health-related resources in three diverse metropolitan areas

    PubMed Central

    Smiley, Melissa J.; Diez Roux, Ana V.; Brines, Shannon J.; Brown, Daniel G.; Evenson, Kelly R.; Rodriguez, Daniel A.

    2010-01-01

    Few studies have investigated the spatial clustering of multiple health-related resources. We constructed 0.5-mile kernel densities of resources for census areas in New York City, NY (n=819 block groups), Baltimore, MD (n=737), and Winston-Salem, NC (n=169). Three of the four resource densities (supermarkets/produce stores, retail areas, and recreational facilities) tended to be correlated with each other, whereas park density was less consistently and sometimes negatively correlated with the others. Blacks were more likely to live in block groups with multiple low resource densities. Spatial regression models showed that block groups with higher proportions of black residents tended to have lower supermarket/produce, retail, and recreational facility densities, although these associations did not always achieve statistical significance. A measure that combined local and neighboring block group racial composition was often a stronger predictor of resources than the local measure alone. Overall, our results from three diverse U.S. cities show that health-related resources are not randomly distributed across space and that disadvantage in multiple domains often clusters with residential racial patterning. PMID:20478737

  4. Thermal effects on domain orientation of tetragonal piezoelectrics studied by in situ x-ray diffraction

    NASA Astrophysics Data System (ADS)

    Chang, Wonyoung; King, Alexander H.; Bowman, Keith J.

    2006-06-01

    Thermal effects on domain orientation in tetragonal lead zirconate titanate (PZT) and lead titanate (PT) have been investigated by using in situ x-ray diffraction with an area detector. In the case of a soft PZT, it is found that the texture parameter called multiples of a random distribution (MRD) initially increases with temperature up to approximately 100°C and then falls to unity at temperatures approaching the Curie temperature, whereas the MRD of hard PZT and PT initially undergoes a smaller increase or no change. The relationship between the mechanical strain energy and domain wall mobility with temperature is discussed.

  5. Raman dissipative soliton fiber laser pumped by an ASE source.

    PubMed

    Pan, Weiwei; Zhang, Lei; Zhou, Jiaqi; Yang, Xuezong; Feng, Yan

    2017-12-15

    The mode locking of a Raman fiber laser with an amplified spontaneous emission (ASE) pump source is investigated for performance improvement. Raman dissipative solitons with a compressed pulse duration of 1.05 ps at a repetition rate of 2.47 MHz are generated by utilizing nonlinear polarization rotation and an all-fiber Lyot filter. A signal-to-noise ratio as high as 85 dB is measured in the radio-frequency spectrum, which suggests excellent temporal stability. Multiple-pulse operation with a unique random static distribution is observed for the first time, to the best of our knowledge, at higher pump power in mode-locked Raman fiber lasers.

  6. Fission prompt gamma-ray multiplicity distribution measurements and simulations at DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chyzh, A; Wu, C Y; Ullmann, J

    2010-08-24

    The near energy-independence of the DANCE efficiency and multiplicity response to γ rays makes it possible to measure the prompt γ-ray multiplicity distribution in fission. We demonstrate this unique capability of DANCE through the comparison of γ-ray energy and multiplicity distributions between measurement and numerical simulation for three radioactive sources: ²²Na, ⁶⁰Co, and ⁸⁸Y. The prospect of measuring the γ-ray multiplicity distribution for both spontaneous and neutron-induced fission is discussed.

  7. a Distributed Online 3D-LIDAR Mapping System

    NASA Astrophysics Data System (ADS)

    Schmiemann, J.; Harms, H.; Schattenberg, J.; Becker, M.; Batzdorfer, S.; Frerichs, L.

    2017-08-01

    In this paper we present work done within the joint development project ANKommEn. It deals with the development of a highly automated robotic system for fast data acquisition in civil disaster scenarios. One of the main requirements is a versatile system, hence the concept embraces a machine cluster consisting of multiple fundamentally different robotic platforms. To cover a large variety of potential deployment scenarios, neither the absolute number of participants nor the precise individual layout of each platform shall be restricted within the conceptual design. This leads to a variety of special requirements, such as onboard and online data processing capabilities for each individual participant and efficient data exchange structures allowing reliable random data exchange between individual robots. We demonstrate the functionality and performance by means of a distributed mapping system evaluated with real-world data in challenging urban and rural indoor/outdoor scenarios.

  8. Beyond Cassie equation: Local structure of heterogeneous surfaces determines the contact angles of microdroplets

    PubMed Central

    Zhang, Bo; Wang, Jianjun; Liu, Zhiping; Zhang, Xianren

    2014-01-01

    The application of Cassie equation to microscopic droplets is recently under intense debate because the microdroplet dimension is often of the same order of magnitude as the characteristic size of substrate heterogeneities, and the mechanism to describe the contact angle of microdroplets is not clear. By representing real surfaces statistically as an ensemble of patterned surfaces with randomly or regularly distributed heterogeneities (patches), lattice Boltzmann simulations here show that the contact angle of microdroplets has a wide distribution, either continuous or discrete, depending on the patch size. The origin of multiple contact angles observed is ascribed to the contact line pinning effect induced by substrate heterogeneities. We demonstrate that the local feature of substrate structure near the contact line determines the range of contact angles that can be stabilized, while the certain contact angle observed is closely related to the contact line width. PMID:25059292

  9. Multifractal analysis of visibility graph-based Ito-related connectivity time series.

    PubMed

    Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano

    2016-02-01

    In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of numbers of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility-graph procedure, which, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is effective at detecting wide "depressions" in input time series.
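The visibility-graph construction that produces such connectivity series can be sketched directly from its geometric criterion. This brute-force toy (not the authors' code) computes the degree sequence of the natural visibility graph, linking two samples when every intermediate sample lies strictly below the straight line joining them:

```python
import numpy as np

def visibility_degrees(x):
    """Degree (connectivity) series of the natural visibility graph of x.

    Samples a and b are linked if every sample between them lies strictly
    below the straight line joining (a, x[a]) and (b, x[b]).
    O(n^2) brute force, fine for short series.
    """
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            ts = np.arange(a + 1, b)
            line = x[b] + (x[a] - x[b]) * (b - ts) / (b - a)
            if ts.size == 0 or np.all(x[ts] < line):
                deg[a] += 1
                deg[b] += 1
    return deg

rng = np.random.default_rng(0)
series = rng.standard_normal(200)
deg = visibility_degrees(series)
print(deg.min(), deg.max())
```

The resulting `deg` array is the "connectivity time series" whose multifractal properties the abstract analyzes; adjacent samples are always mutually visible, so every node has degree at least one.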

  10. Statistical patterns of visual search for hidden objects

    PubMed Central

    Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.

    2012-01-01

    The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition between a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829

  11. Mathematics of gravitational lensing: multiple imaging and magnification

    NASA Astrophysics Data System (ADS)

    Petters, A. O.; Werner, M. C.

    2010-09-01

    The mathematical theory of gravitational lensing has revealed many generic and global properties. Beginning with multiple imaging, we review Morse-theoretic image counting formulas and lower bound results, and complex-algebraic upper bounds in the case of single and multiple lens planes. We discuss recent advances in the mathematics of stochastic lensing, discussing a general formula for the global expected number of minimum lensed images as well as asymptotic formulas for the probability densities of the microlensing random time delay functions, random lensing maps, and random shear, and an asymptotic expression for the global expected number of micro-minima. Multiple imaging in optical geometry and in a spacetime setting is treated. We review global magnification relation results for model-dependent scenarios and cover recent developments on universal local magnification relations for higher order caustics.

  12. Random lasing in dye-doped polymer dispersed liquid crystal film

    NASA Astrophysics Data System (ADS)

    Wu, Rina; Shi, Rui-xin; Wu, Xiaojiao; Wu, Jie; Dai, Qin

    2016-09-01

    A dye-doped polymer-dispersed liquid crystal film was designed and fabricated, and its random lasing action was studied. A mixture of laser dye, nematic liquid crystal, chiral dopant, and PVA was used to prepare the film by means of microcapsules. Scanning electron microscopy showed that most liquid crystal droplets in the polymer matrix were small, ranging from 30 μm to 40 μm. Under optical excitation by a frequency-doubled 532 nm Nd:YAG pump laser, a number of discrete and sharp random-lasing peaks could be measured in the range of 575-590 nm. The line-width of the lasing peaks was 0.2 nm and the threshold of the random lasing was 9 mJ. Under heating, the emission peaks of the random lasing disappeared. By measuring the energy distribution of the emission light spot, the radiation mechanism was confirmed to be random lasing; this mechanism was then analyzed and discussed. Experimental results indicated that the size of the liquid crystal droplets is the decisive factor influencing the lasing mechanism. The surface anchoring role can be ignored when the liquid crystal droplets in the polymer matrix are small, which is beneficial for multiple scattering. The transmission path of photons is similar to that in a ring cavity, providing the feedback needed to obtain random lasing output. Project supported by the National Natural Science Foundation of China (Grant No. 61378042), the Colleges and Universities in Liaoning Province Outstanding Young Scholars Growth Plans, China (Grant No. LJQ2015093), and Shenyang Ligong University Laser and Optical Information of Liaoning Province Key Laboratory Open Funds, China.

  13. Studies on energy transfer in dendrimer supermolecule using classical random walk model and Eyring model

    NASA Astrophysics Data System (ADS)

    Rana, Dipankar; Gangopadhyay, Gautam

    2003-01-01

    We have analyzed the energy transfer process in a dendrimer supermolecule using a classical random walk model and an Eyring model of membrane permeation. Here the energy transfer is considered as a multiple barrier crossing process by thermal hopping on the backbone of a Cayley tree. It is shown that the mean residence time and mean first passage time, which involve explicit local escape rates, depend upon the temperature, size of the molecule, core branching, and the nature of the potential energy landscape along the Cayley tree architecture. The effect of branching tends to create a uniform distribution of mean residence time over the generations, and the distribution depends upon the interplay of funneling and local rates of transitions. The calculation of flux at the steady state from the Eyring model also gives a useful idea about the rate when the dendrimeric system is considered as an open system where the core absorbs the transported energy, like a photosynthetic reaction center, and a continuous supply of external energy is maintained at the peripheral nodes. The steady-state flux is shown to depend on the above system parameters and bears a qualitative resemblance to the result of the mean first passage time approach.
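The thermal-hopping picture can be reduced to a biased walk on the generation index of the tree, since each node has one inward bond and several outward bonds. The following toy simulation (rates, energies, and parameters are illustrative stand-ins, not the paper's model) estimates the mean first passage time from the periphery to the core:

```python
import numpy as np

rng = np.random.default_rng(2)

def mfpt_to_core(generations=6, branching=2, delta_e=1.0, beta=1.0,
                 n_walkers=2000):
    """Mean first passage time (in hops) from periphery to core of a
    Cayley tree, reduced to a biased walk on the generation index.

    From generation g there is 1 inward bond and `branching` outward
    bonds; an energy drop delta_e per generation toward the core biases
    hop probabilities via Boltzmann factors (the 'funnel'). The
    outermost generation has only its inward bond.
    """
    w_in = np.exp(beta * delta_e)            # downhill rate (toward core)
    p_in = w_in / (w_in + branching)         # vs. `branching` uphill bonds
    times = np.empty(n_walkers)
    for i in range(n_walkers):
        g, t = generations, 0
        while g > 0:
            t += 1
            if g == generations:
                g -= 1                       # only the inward bond exists
            else:
                g += -1 if rng.random() < p_in else 1
        times[i] = t
    return times.mean()

m = mfpt_to_core()
print(round(m, 1))
```

Increasing `beta * delta_e` (a steeper funnel) shortens the passage time, while increasing `branching` lengthens it, mirroring the interplay of funneling and branching described in the abstract.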

  14. Challenges to recruitment and retention of African Americans in the gene-environment trial of response to dietary interventions (GET READI) for heart health

    PubMed Central

    Kennedy, Betty M.; Harsha, David W.; Bookman, Ebony B.; Hill, Yolanda R.; Rankinen, Tuomo; Rodarte, Ruben Q.; Murla, Connie D.

    2011-01-01

    In this paper, challenges to recruiting African Americans for a dietary feeding trial are examined, and the lessons learned and suggestions for overcoming these challenges in future trials are discussed. A total of 333 individuals were randomized in the trial and 234 (167 sibling pairs and 67 parents/siblings) completed the dietary intervention and the DNA blood sampling required for genetic analysis. The trial used multiple strategies for recruitment. Letters and flyers distributed by hand at various churches resulted in the largest number (n = 153, 46%) of African Americans in the trial. Word of mouth accounted for the second largest number (n = 120, 36%) and included prior study participants. These two recruitment sources represented 82% (n = 273) of the total number of individuals randomized in GET READI. The remaining 18% (n = 60) consisted of a combination of sources including printed messages on check stubs, newspaper articles, radio and TV appearances, screening events and presentations. Though challenging, the recruitment efforts for GET READI produced a significant number of African American participants despite the inability to complete the trial as planned because of low recruitment yields. Nevertheless, the recruitment process produced substantial numbers of participants who successfully completed all study requirements.

  15. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

    Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
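The priority-based conflict resolution can be illustrated in the plane (the paper works intrinsically on surfaces; this toy stays in the unit square, and all names and parameters are illustrative). Each candidate gets a random, unique priority, and a candidate survives only if it loses no conflict against a surviving higher-priority candidate; because the outcome depends only on priorities, not on visit order, the result is the same no matter how candidates are processed, which is what makes the parallel formulation possible:

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_disk_by_priority(n_candidates=4000, r=0.05):
    """Toy planar sketch of priority-based Poisson disk sampling.

    Processing candidates in (random, unique) priority order emulates
    what parallel threads would compute by resolving conflicts with
    priority comparisons.
    """
    pts = rng.random((n_candidates, 2))          # candidates in unit square
    order = rng.permutation(n_candidates)        # random unique priorities
    accepted = []
    for idx in order:
        p = pts[idx]
        if all(np.hypot(*(p - q)) >= r for q in accepted):
            accepted.append(p)
    return np.array(accepted)

samples = poisson_disk_by_priority()
d_min = min(np.hypot(*(a - b)) for i, a in enumerate(samples)
            for b in samples[i + 1:])
print(len(samples), round(d_min, 3))
```

Every pair of accepted samples is at least `r` apart, the defining Poisson disk property; the surface version replaces these planar distances with intrinsic geodesic distances.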

  16. Three-dimensional distribution of random velocity inhomogeneities at the Nankai trough seismogenic zone

    NASA Astrophysics Data System (ADS)

    Takahashi, T.; Obana, K.; Yamamoto, Y.; Nakanishi, A.; Kaiho, Y.; Kodaira, S.; Kaneda, Y.

    2012-12-01

    The Nankai trough in southwestern Japan is a convergent margin where the Philippine Sea plate is subducted beneath the Eurasian plate. There are major fault segments of huge earthquakes, called the Tokai, Tonankai and Nankai earthquakes. Given the earthquake occurrence history over the past several hundred years, we must expect various rupture patterns, such as simultaneous or nearly continuous ruptures of multiple fault segments. The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) conducted seismic surveys at the Nankai trough in order to clarify the mutual relations between seismic structures and fault segments, as a part of "Research concerning Interaction Between the Tokai, Tonankai and Nankai Earthquakes" funded by the Ministry of Education, Culture, Sports, Science and Technology, Japan. This study evaluated the spatial distribution of random velocity inhomogeneities from Hyuga-nada to the Kii channel by using velocity seismograms of small and moderate sized earthquakes. Random velocity inhomogeneities are estimated by the peak delay time analysis of S-wave envelopes (e.g., Takahashi et al. 2009). Peak delay time is defined as the time lag from the S-wave onset to its maximal amplitude arrival. This quantity mainly reflects the accumulated multiple forward scattering effect due to random inhomogeneities, and is quite insensitive to inelastic attenuation. Peak delay times are measured from the rms envelopes of horizontal components at 4-8 Hz, 8-16 Hz and 16-32 Hz. This study used velocity seismograms recorded by 495 ocean bottom seismographs and 378 onshore seismic stations. The onshore stations comprise the F-net and Hi-net stations maintained by the National Research Institute for Earth Science and Disaster Prevention (NIED) of Japan. It is assumed that the random inhomogeneities are represented by the von Karman type PSDF.
A preliminary inversion analysis shows that the spectral gradient of the PSDF (i.e., the scale dependence of the inhomogeneities) is nearly uniform over the Nankai trough, but that random inhomogeneities at smaller wavenumbers show anomalously large values at the southwestern part of Hyuga-nada and at the Kii channel. The anomaly at Hyuga-nada roughly coincides with the subducted Kyushu-Palau ridge. Similar random inhomogeneities were imaged near the remnant of an ancient arc in the northern Izu-Bonin arc (Takahashi et al. 2011). We speculate that these random inhomogeneities reflect remnants of ancient volcanic activity. These results imply that the random inhomogeneities at the Kii channel are possibly related to a subducted seamount, and that random inhomogeneities are useful for discussing medium characteristics in subduction zones.

  17. A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall

    NASA Astrophysics Data System (ADS)

    Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian

    2018-02-01

    Distributions of rainfall with the time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters from the radar observations at Melbourne (Australia). Orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observation. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
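The mass-conserving multiplicative splitting at the heart of such cascades can be sketched in a few lines. This schematic stand-in (weights, levels, and the 2x2 branching are illustrative choices, not the calibrated STEPS parameters) refines a coarse rain field while preserving its mean accumulation:

```python
import numpy as np

rng = np.random.default_rng(4)

def cascade_downscale(coarse, levels=3, sigma=0.3):
    """Disaggregate a coarse rain field by a multiplicative random cascade.

    Each level splits every cell into 2x2 children and multiplies them by
    lognormal weights renormalized to conserve the parent mean, so the
    coarse-scale accumulation is preserved while small-scale variability
    is added.
    """
    field = np.asarray(coarse, dtype=float)
    for _ in range(levels):
        field = np.kron(field, np.ones((2, 2)))       # refine the grid
        w = rng.lognormal(mean=0.0, sigma=sigma, size=field.shape)
        # renormalize weights within each 2x2 block to conserve mass
        blocks = w.reshape(w.shape[0] // 2, 2, w.shape[1] // 2, 2)
        blocks /= blocks.mean(axis=(1, 3), keepdims=True)
        field *= w
    return field

coarse = np.array([[4.0, 0.0], [1.0, 2.0]])
fine = cascade_downscale(coarse)
print(fine.shape, round(fine.mean(), 6), coarse.mean())
```

Each cascade level adds variability at half the previous scale while the block-wise renormalization keeps the mean of every parent cell fixed, so the coarse accumulation survives the downscaling exactly.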

  18. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
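The role of hierarchical scale mixtures in down-weighting outliers can be seen in the simplest possible case: estimating a location under a t-distribution, which is a Gaussian scale mixture, via EM. This one-dimensional toy is a stand-in for the mechanism, not the R-FMM itself:

```python
import numpy as np

def robust_location_t(y, nu=3.0, iters=50):
    """EM for the location of a t-distribution (a Gaussian scale mixture).

    The latent per-observation scales act like the hierarchical scale
    mixtures in R-FMM: outlying points receive small weights, so the
    estimate down-weights them automatically.
    """
    mu, sigma2 = np.median(y), np.var(y)
    for _ in range(iters):
        w = (nu + 1) / (nu + (y - mu) ** 2 / sigma2)   # E-step weights
        mu = np.average(y, weights=w)                  # M-step: location
        sigma2 = np.mean(w * (y - mu) ** 2)            # M-step: scale
    return mu, w

rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(0.0, 1.0, 200), [25.0, 30.0]])  # two outliers
mu, w = robust_location_t(y)
print(round(mu, 2), round(w[-1], 3))
```

The estimated location stays near zero despite the two gross outliers, and the final weights `w` play the role of the outlier-flagging statistics the abstract describes.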

  19. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, and so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effect, random effect, and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  20. Randomness versus specifics for word-frequency distributions

    NASA Astrophysics Data System (ADS)

    Yan, Xiaoyong; Minnhagen, Petter

    2016-02-01

    The text-length dependence of real word-frequency distributions can be connected to the general properties of a random book. It is pointed out that this finding has strong implications when deciding between two conceptually different views on word-frequency distributions: the specific 'Zipf's view' and the non-specific 'randomness view'. It is also noticed that the text-length transformation of a random book has an exact scaling property precisely for the power-law index γ = 1, as opposed to Zipf's exponent γ = 2, and the implication of this exact scaling property is discussed. However, a real text has γ > 1, and as a consequence γ increases when a real text is shortened. The connections to the predictions from RGF (Random Group Formation) and to the infinite-length limit of a meta-book are also discussed. The difference between 'curve-fitting' and 'predicting' word-frequency distributions is stressed. It is pointed out that the question of randomness versus specifics for the distribution of outcomes of sufficiently complex systems has a much wider relevance than just the word-frequency example analyzed in the present work.
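    The 'randomness view' invoked above can be illustrated with a minimal, purely editorial sketch (not the RGF model of the paper): characters typed uniformly at random, with space as the word delimiter, already yield a heavy-tailed rank-frequency curve, since short words are exponentially more likely than long ones. Alphabet size and text length are arbitrary.

```python
import random
from collections import Counter

random.seed(42)
# A "random book": characters typed uniformly at random; a space ends a word.
alphabet = "abcde"
text = "".join(random.choices(alphabet + " ", k=200_000))

# Rank-frequency list: heavy-tailed without any linguistic structure,
# because a word of length L occurs with probability ~ (5/6)**L.
freqs = sorted(Counter(text.split()).values(), reverse=True)
print(freqs[:3], len(freqs))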

  1. Randomness determines practical security of BB84 quantum key distribution.

    PubMed

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-11-10

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, different attacking schemes have been proposed that exploit imperfect devices, but a general security analysis model covering all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack, and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of practical quantum key distribution systems.

  2. Randomness determines practical security of BB84 quantum key distribution

    PubMed Central

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-01-01

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, different attacking schemes have been proposed that exploit imperfect devices, but a general security analysis model covering all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack, and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of practical quantum key distribution systems. PMID:26552359

  3. Randomness determines practical security of BB84 quantum key distribution

    NASA Astrophysics Data System (ADS)

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-11-01

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, different attacking schemes have been proposed that exploit imperfect devices, but a general security analysis model covering all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack, and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of practical quantum key distribution systems.

  4. The effects of noise due to random undetected tilts and paleosecular variation on regional paleomagnetic directions

    USGS Publications Warehouse

    Calderone, G.J.; Butler, R.F.

    1991-01-01

    Random tilting of a single paleomagnetic vector produces a distribution of vectors which is not rotationally symmetric about the original vector and therefore not Fisherian. Monte Carlo simulations were performed on two types of vector distributions: 1) distributions of vectors formed by perturbing a single original vector with a Fisher distribution of bedding poles (each defining a tilt correction) and 2) standard Fisher distributions. These simulations demonstrate that inclinations of vectors drawn from both distributions are biased toward shallow inclinations. The Fisher mean direction of the distribution of vectors formed by perturbing a single vector with random undetected tilts is biased toward shallow inclinations, but this bias is insignificant for angular dispersions of bedding poles less than 20°. -from Authors
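    The second simulation type above (vectors drawn from a standard Fisher distribution) can be sketched in a few lines to exhibit the shallow-inclination bias the abstract describes. This is an editorial illustration, not the authors' code; the precision parameter κ = 10 and the 70° true inclination are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, n = 10.0, 100_000
inc0 = np.radians(70.0)          # true inclination of the mean direction

# Fisher-distributed colatitudes about the z-axis (inverse-CDF method)
u = rng.random(n)
cos_t = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
sin_t = np.sqrt(1.0 - cos_t**2)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
v = np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

# Rotate the mean from +z down to inclination inc0 (rotation about y)
a = np.pi / 2 - inc0
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
w = v @ R.T

# Individual inclinations average out shallower than the true 70 degrees
inc = np.degrees(np.arctan2(w[:, 2], np.hypot(w[:, 0], w[:, 1])))
print(round(inc.mean(), 2))
```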

  5. Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard.

    PubMed

    Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A

    2016-03-15

    The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
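    The combined-criterion idea above can be illustrated with a toy calculation: two dependent, normally distributed biomarker scores, a rank-based (Mann-Whitney) AUC, and a simple sum of the markers standing in for the paper's model-based predictive probability of disease. All distributions and shifts here are illustrative assumptions, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
cov = [[1.0, 0.3], [0.3, 1.0]]   # dependence between the two biomarkers
healthy = rng.multivariate_normal([0.0, 0.0], cov, n)
diseased = rng.multivariate_normal([1.0, 1.0], cov, n)

def auc(pos, neg):
    """AUC as the Mann-Whitney statistic: P(random pos score > random neg)."""
    scores = np.concatenate([neg, pos])
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    r_pos = ranks[len(neg):].sum()
    return (r_pos - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))

auc1 = auc(diseased[:, 0], healthy[:, 0])                  # single biomarker
combined = auc(diseased.sum(axis=1), healthy.sum(axis=1))  # both combined
print(round(auc1, 3), round(combined, 3))
```

A combined criterion built from both markers yields a higher AUC than either marker alone, which is the cAUC comparison in miniature.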

  6. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    ERIC Educational Resources Information Center

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.…

  7. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension Df,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density ρ(2) was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ(2) on α is similar to that of Df,p on α. ρ(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ(2) tends to a stable equilibrium value.
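    The qualitative effect of the memory decay exponent α can be sketched with a deliberately simplified 1D walker: here each site remembers only its most recent visit (rather than the full visit history of the model), and the enhancement strength 5.0 is an assumed constant. Slow decay (small α) pulls the walker back to recently visited sites, producing a locally dense pattern; fast decay recovers a dispersive random walk.

```python
import random

def walk(alpha, steps=50_000, seed=42):
    """1D walker whose sites remember their last visit; remanent
    information decays with age as age**-alpha (a simplification of
    the full visit-history memory in the model above)."""
    rng = random.Random(seed)
    last = {}                     # site -> time of most recent visit
    pos = 0
    for t in range(1, steps + 1):
        last[pos] = t
        # Enhancement: a neighbor's weight grows with its remanent memory
        wl = 1.0 + (5.0 * (t - last[pos - 1] + 1) ** -alpha if pos - 1 in last else 0.0)
        wr = 1.0 + (5.0 * (t - last[pos + 1] + 1) ** -alpha if pos + 1 in last else 0.0)
        pos += 1 if rng.random() * (wl + wr) < wr else -1
    return len(last)              # number of distinct sites visited

n_slow, n_fast = walk(alpha=0.1), walk(alpha=5.0)
print(n_slow, n_fast)             # slow decay keeps the walk locally dense
```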

  8. The invariant statistical rule of aerosol scattering pulse signal modulated by random noise

    NASA Astrophysics Data System (ADS)

    Yan, Zhen-gang; Bian, Bao-Min; Yang, Juan; Peng, Gang; Li, Zhen-hua

    2010-11-01

    A model of random background noise acting on particle signals is established to study how the background noise of the photoelectric sensor in a laser airborne particle counter affects the statistical character of the aerosol scattering pulse signals. The results show that the noise broadens the statistical distribution of the particle measurements. Further numerical work shows that the output signal amplitude retains the same distribution when aerosol particles with a lognormal distribution are modulated by random noise that is also lognormally distributed; that is, the distribution obeys a law of statistical invariance. Based on this model, the background noise of the photoelectric sensor and the counting distributions of the random aerosol scattering pulse signals are obtained and analyzed using a high-speed data acquisition card (PCI-9812). The experimental and simulation results are found to be in good agreement.
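    If the modulation above is read as multiplicative, the invariance is exact, since the product of two lognormal variables is again lognormal: the log of the product is a sum of two Gaussians. A small numerical check (illustrative parameters, not the paper's sensor model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
signal = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # pulse amplitudes
noise = rng.lognormal(mean=0.0, sigma=0.3, size=n)    # multiplicative noise
out = signal * noise

# ln(out) = ln(signal) + ln(noise): a sum of two Gaussians, hence Gaussian,
# so the modulated amplitudes remain lognormal (zero skew/excess kurtosis
# of the log is the signature checked here).
x = np.log(out)
z = (x - x.mean()) / x.std()
skew = (z**3).mean()
ex_kurt = (z**4).mean() - 3.0
print(round(skew, 3), round(ex_kurt, 3))
```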

  9. A multiple imputation strategy for sequential multiple assignment randomized trials

    PubMed Central

    Shortreed, Susan M.; Laber, Eric; Stroup, T. Scott; Pineau, Joelle; Murphy, Susan A.

    2014-01-01

    Sequential multiple assignment randomized trials (SMARTs) are increasingly being used to inform clinical and intervention science. In a SMART, each patient is repeatedly randomized over time. Each randomization occurs at a critical decision point in the treatment course. These critical decision points often correspond to milestones in the disease process or other changes in a patient’s health status. Thus, the timing and number of randomizations may vary across patients and depend on evolving patient-specific information. This presents unique challenges when analyzing data from a SMART in the presence of missing data. This paper presents the first comprehensive discussion of missing data issues typical of SMART studies: we describe five specific challenges, and propose a flexible imputation strategy to facilitate valid statistical estimation and inference using incomplete data from a SMART. To illustrate these contributions, we consider data from the Clinical Antipsychotic Trial of Intervention and Effectiveness (CATIE), one of the most well-known SMARTs to date. PMID:24919867

  10. Can Targeted Intervention Mitigate Early Emotional and Behavioral Problems?: Generating Robust Evidence within Randomized Controlled Trials

    PubMed Central

    Doyle, Orla; McGlanaghy, Edel; O’Farrelly, Christine; Tremblay, Richard E.

    2016-01-01

    This study examined the impact of a targeted Irish early intervention program on children’s emotional and behavioral development using multiple methods to test the robustness of the results. Data on 164 Preparing for Life participants who were randomly assigned into an intervention group, involving home visits from pregnancy onwards, or a control group, was used to test the impact of the intervention on Child Behavior Checklist scores at 24-months. Using inverse probability weighting to account for differential attrition, permutation testing to address small sample size, and quantile regression to characterize the distributional impact of the intervention, we found that the few treatment effects were largely concentrated among boys most at risk of developing emotional and behavioral problems. The average treatment effect identified a 13% reduction in the likelihood of falling into the borderline clinical threshold for Total Problems. The interaction and subgroup analysis found that this main effect was driven by boys. The distributional analysis identified a 10-point reduction in the Externalizing Problems score for boys at the 90th percentile. No effects were observed for girls or for the continuous measures of Total, Internalizing, and Externalizing problems. These findings suggest that the impact of this prenatally commencing home visiting program may be limited to boys experiencing the most difficulties. Further adoption of the statistical methods applied here may help to improve the internal validity of randomized controlled trials and contribute to the field of evaluation science more generally. Trial Registration: ISRCTN Registry ISRCTN04631728 PMID:27253184

  11. Random matrices and condensation into multiple states

    NASA Astrophysics Data System (ADS)

    Sadeghi, Sina; Engel, Andreas

    2018-03-01

    In the present work, we employ methods from statistical mechanics of disordered systems to investigate static properties of condensation into multiple states in a general framework. We aim at showing how typical properties of random interaction matrices play a vital role in manifesting the statistics of condensate states. In particular, an analytical expression for the fraction of condensate states in the thermodynamic limit is provided that confirms the result of the mean number of coexisting species in a random tournament game. We also study the interplay between the condensation problem and zero-sum games with correlated random payoff matrices.

  12. An evolving model for the lodging-service network in a tourism destination

    NASA Astrophysics Data System (ADS)

    Hernández, Juan M.; González-Martel, Christian

    2017-09-01

    Tourism is a complex dynamic system including multiple actors that are related to each other, composing an evolving social network. This paper presents a growing model that explains how part of the supply components in a tourism system forms a social network. Specifically, the lodgings and services in a destination are the network nodes, and a link between them appears if a representative tourist hosted in the lodging visits/consumes the service during his/her stay. The specific link between the two categories is determined by a random and preferential attachment rule. The analytic results show that the long-term degree distribution of services follows a shifted power-law distribution. The numerical simulations show slight disagreements with the theoretical results in the case of the one-mode degree distribution of services, due to the low order of convergence to zero of X-motifs. The model predictions are compared with real data from a popular tourist destination in Gran Canaria, Spain, showing good agreement between analytical and empirical data for the degree distribution of services. The theoretical model was validated assuming four types of perturbations in the real data.

  13. Statistical analysis of secondary particle distributions in relativistic nucleus-nucleus collisions

    NASA Technical Reports Server (NTRS)

    Mcguire, Stephen C.

    1987-01-01

    The use is described of several statistical techniques to characterize structure in the angular distributions of secondary particles from nucleus-nucleus collisions in the energy range 24 to 61 GeV/nucleon. The objective of this work was to determine whether there are correlations between emitted particle intensity and angle that may be used to support the existence of the quark gluon plasma. The techniques include chi-square null hypothesis tests, the method of discrete Fourier transform analysis, and fluctuation analysis. We have also used the method of composite unit vectors to test for azimuthal asymmetry in a data set of 63 JACEE-3 events. Each method is presented in a manner that provides the reader with some practical detail regarding its application. Of those events with relatively high statistics, Fe approaches 0 at 55 GeV/nucleon was found to possess an azimuthal distribution with a highly non-random structure. No evidence of non-statistical fluctuations was found in the pseudo-rapidity distributions of the events studied. It is seen that the most effective application of these methods relies upon the availability of many events or single events that possess very high multiplicities.
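    The composite-unit-vector test for azimuthal asymmetry mentioned above can be sketched with a standard Rayleigh statistic: sum the unit vectors of the azimuthal angles and compare the resultant length against the uniform-distribution expectation. This is an editorial illustration with synthetic angles, not the JACEE data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Azimuthal angles of emitted particles in one synthetic event
# (uniform here, i.e. no azimuthal asymmetry by construction)
phi = rng.uniform(0.0, 2.0 * np.pi, n)

# Composite unit vector: R = |sum of unit vectors|. Under uniformity,
# 2*R^2/n is asymptotically chi-squared with 2 degrees of freedom.
R = np.hypot(np.cos(phi).sum(), np.sin(phi).sum())
stat = 2.0 * R**2 / n
print(round(stat, 2))   # large values would signal azimuthal asymmetry
```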

  14. Multiplicity distributions of shower particles and target fragments in 84Kr36-emulsion interactions at 1 GeV per nucleon

    NASA Astrophysics Data System (ADS)

    Singh, M. K.; Soma, A. K.; Pathak, Ramji; Singh, V.

    2014-03-01

    This article focuses on the multiplicity distributions of shower particles and target fragments for interactions of 84Kr36 with the NIKFI BR-2 nuclear emulsion target at a kinetic energy of 1 GeV per nucleon. The experimental multiplicity distributions of shower particles, grey particles, black particles, and heavily ionizing particles are well described by the multi-component Erlang distribution of the multi-source thermal model. We have observed a linear correlation among the multiplicities of the above-mentioned particles and fragments. Further experimental studies have shown a saturation phenomenon in the shower particle multiplicity as the target fragment multiplicity increases.

  15. Random-Walk Type Model with Fat Tails for Financial Markets

    NASA Astrophysics Data System (ADS)

    Matuttis, Hans-Georg

    Starting from the random-walk model, practices of financial markets are incorporated into the random walk so that fat-tailed distributions, like those in the high-frequency data of the S&P 500 index, are reproduced, even though the individual mechanisms are modeled by normally distributed data. The incorporation of local correlation narrows the distribution for "frequent" events, whereas global correlations due to technical analysis lead to fat tails. Delay of market transactions in the trading process shifts the fat-tail probabilities downwards. Such an inclusion of reactions to market fluctuations leads to mini-trends which are distributed with unit variance.
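    The core mechanism, fat tails from purely Gaussian ingredients once returns feed back into volatility, can be sketched with a toy ARCH-like recursion. The coefficients below are illustrative assumptions, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
eps = rng.standard_normal(n)   # Gaussian innovations throughout
r = np.zeros(n)
sigma = 1.0
for t in range(1, n):
    # Volatility feeds on the size of the last move, a crude stand-in
    # for traders reacting to fluctuations (technical analysis).
    sigma = 0.1 + 0.9 * abs(r[t - 1]) + 0.05 * sigma
    r[t] = sigma * eps[t]

z = (r - r.mean()) / r.std()
ex_kurt = (z**4).mean() - 3.0
print(round(ex_kurt, 2))       # clearly positive: fat tails emerge
```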

  16. A systematic examination of a random sampling strategy for source apportionment calculations.

    PubMed

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
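    A minimal version of the random sampling strategy, for the simplest two-source, one-marker case, looks as follows. The source signatures and their spreads are hypothetical numbers chosen for illustration; the point is that the fractions are computed per random draw of the source profiles, so their full distribution (not just a point estimate) is obtained.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Two-source, one-marker mixing model: d_mix = f*d1 + (1 - f)*d2.
# The source signatures are themselves variable, so draw them from
# assumed source distributions and solve for f each time.
d1 = rng.normal(-27.0, 1.0, n)   # marker signature of source 1
d2 = rng.normal(-19.0, 1.0, n)   # marker signature of source 2
d_mix = -24.0                    # value measured in the mixed sample

f1 = (d_mix - d2) / (d1 - d2)    # fraction from source 1, per draw
print(round(np.median(f1), 3),
      round(np.percentile(f1, 95) - np.percentile(f1, 5), 3))
```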

  17. Method for simulating atmospheric turbulence phase effects for multiple time slices and anisoplanatic conditions.

    PubMed

    Roggemann, M C; Welsh, B M; Montera, D; Rhoadarmer, T A

    1995-07-10

    Simulating the effects of atmospheric turbulence on optical imaging systems is an important aspect of understanding the performance of these systems. Simulations are particularly important for understanding the statistics of some adaptive-optics system performance measures, such as the mean and variance of the compensated optical transfer function, and for understanding the statistics of estimators used to reconstruct intensity distributions from turbulence-corrupted image measurements. Current methods of simulating the performance of these systems typically make use of random phase screens placed in the system pupil. Methods exist for making random draws of phase screens that have the correct spatial statistics. However, simulating temporal effects and anisoplanatism requires one or more phase screens at different distances from the aperture, possibly moving with different velocities. We describe and demonstrate a method for creating random draws of phase screens with the correct space-time statistics for arbitrary turbulence and wind-velocity profiles, which can be placed in the telescope pupil in simulations. Results are provided for both the von Kármán and the Kolmogorov turbulence spectra. We also show how to simulate anisoplanatic effects with this technique.
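    A single spatial phase screen with approximately Kolmogorov statistics can be sketched with the common FFT filtering approach: white Gaussian noise shaped by the square root of the phase power spectral density. This is a simplified illustration of the general idea, not the authors' space-time method; grid size, spacing, Fried parameter, and the overall normalization convention are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 256          # grid points per side (illustrative)
delta = 0.01     # grid spacing in meters (assumed)
r0 = 0.10        # Fried parameter in meters (assumed)

# Spatial-frequency grid and Kolmogorov phase PSD:
# Phi(f) = 0.023 * r0**(-5/3) * f**(-11/3)
fx = np.fft.fftfreq(N, d=delta)
fxx, fyy = np.meshgrid(fx, fx)
f = np.hypot(fxx, fyy)
f[0, 0] = np.inf                 # zero out the undefined DC component
psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)

# Filter complex white Gaussian noise with sqrt(PSD), inverse-FFT, and
# keep the real part (scaling follows one common FFT convention).
df = 1.0 / (N * delta)
noise = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
screen = np.real(np.fft.ifft2(noise * np.sqrt(psd) * df)) * N * N
```

Low-order modes are under-represented by this plain FFT sampling; subharmonic corrections are the usual remedy, omitted here for brevity.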

  18. Efficient Numerical Methods for Nonlinear-Facilitated Transport and Exchange in a Blood-Tissue Exchange Unit

    PubMed Central

    Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.

    2010-01-01

    The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary, BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack method are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
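    The MacCormack predictor-corrector scheme named above can be sketched on the simplest convection-dominated problem, linear advection u_t + a*u_x = 0, as a stand-in for the BTEX transport equations. The grid, speed, and initial profile are illustrative; periodic boundaries are used so conservation can be checked exactly.

```python
import numpy as np

a, N = 1.0, 200
dx = 1.0 / N
dt = 0.4 * dx / a                      # Courant number 0.4 (stable: <= 1)
x = np.arange(N) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)    # initial Gaussian bolus
mass0 = u.sum() * dx

c = a * dt / dx
for _ in range(100):
    up = u - c * (np.roll(u, -1) - u)               # predictor: forward diff
    u = 0.5 * (u + up - c * (up - np.roll(up, 1)))  # corrector: backward diff

# The bolus advects a distance a*100*dt = 0.2 with second-order accuracy,
# and the flux form conserves total mass on the periodic grid.
print(x[np.argmax(u)], u.sum() * dx / mass0)
```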

  19. Compensatory mutations cause excess of antagonistic epistasis in RNA secondary structure folding.

    PubMed

    Wilke, Claus O; Lenski, Richard E; Adami, Christoph

    2003-02-05

    The rate at which fitness declines as an organism's genome accumulates random mutations is an important variable in several evolutionary theories. At an intuitive level, it might seem natural that random mutations should tend to interact synergistically, such that the rate of mean fitness decline accelerates as the number of random mutations is increased. However, in a number of recent studies, a prevalence of antagonistic epistasis (the tendency of multiple mutations to have a mitigating rather than reinforcing effect) has been observed. We studied in silico the net amount and form of epistatic interactions in RNA secondary structure folding by measuring the fraction of neutral mutants as a function of mutational distance d. We found a clear prevalence of antagonistic epistasis in RNA secondary structure folding. By relating the fraction of neutral mutants at distance d to the average neutrality at distance d, we showed that this prevalence derives from the existence of many compensatory mutations at larger mutational distances. Our findings imply that the average direction of epistasis in simple fitness landscapes is directly related to the density with which fitness peaks are distributed in these landscapes.

  20. Compensatory mutations cause excess of antagonistic epistasis in RNA secondary structure folding

    PubMed Central

    Wilke, Claus O; Lenski, Richard E; Adami, Christoph

    2003-01-01

    Background The rate at which fitness declines as an organism's genome accumulates random mutations is an important variable in several evolutionary theories. At an intuitive level, it might seem natural that random mutations should tend to interact synergistically, such that the rate of mean fitness decline accelerates as the number of random mutations is increased. However, in a number of recent studies, a prevalence of antagonistic epistasis (the tendency of multiple mutations to have a mitigating rather than reinforcing effect) has been observed. Results We studied in silico the net amount and form of epistatic interactions in RNA secondary structure folding by measuring the fraction of neutral mutants as a function of mutational distance d. We found a clear prevalence of antagonistic epistasis in RNA secondary structure folding. By relating the fraction of neutral mutants at distance d to the average neutrality at distance d, we showed that this prevalence derives from the existence of many compensatory mutations at larger mutational distances. Conclusions Our findings imply that the average direction of epistasis in simple fitness landscapes is directly related to the density with which fitness peaks are distributed in these landscapes. PMID:12590655

  1. Random Visitor: Defense against Identity Attacks in P2P Networks

    NASA Astrophysics Data System (ADS)

    Gu, Jabeom; Nah, Jaehoon; Kwon, Hyeokchan; Jang, Jonsoo; Park, Sehyun

    Various advantages of cooperative peer-to-peer networks are strongly counterbalanced by the open nature of a distributed, serverless network. In such networks, it is relatively easy for an attacker to launch various attacks such as misrouting, corrupting, or dropping messages as a result of a successful identifier forgery. The impact of an identifier forgery is particularly severe because the whole network can be compromised by attacks such as Sybil or Eclipse. In this paper, we present an identifier authentication mechanism called random visitor, which uses one or more randomly selected peers as delegates of identity proof. Our scheme uses identity-based cryptography and identity ownership proof mechanisms collectively to create multiple, cryptographically protected indirect bindings between two peers, instantly when needed, through the delegates. Because of these bindings, an attacker cannot achieve an identifier forgery related attack against interacting peers without breaking the bindings. Therefore, our mechanism limits the possibility of identifier forgery attacks efficiently by disabling an attacker's ability to break the binding. The design rationale and framework details are presented. A security analysis shows that our scheme is strong enough against identifier related attacks and that the strength increases if there are many peers (more than several thousand) in the network.

  2. Distribution of shortest cycle lengths in random networks

    NASA Astrophysics Data System (ADS)

    Bonneau, Haggai; Hassid, Aviv; Biham, Ofer; Kühn, Reimer; Katzav, Eytan

    2017-12-01

    We present analytical results for the distribution of shortest cycle lengths (DSCL) in random networks. The approach is based on the relation between the DSCL and the distribution of shortest path lengths (DSPL). We apply this approach to configuration model networks, for which analytical results for the DSPL were obtained before. We first calculate the fraction of nodes in the network which reside on at least one cycle. Conditioning on being on a cycle, we provide the DSCL over ensembles of configuration model networks with degree distributions which follow a Poisson distribution (Erdős-Rényi network), degenerate distribution (random regular graph), and a power-law distribution (scale-free network). The mean and variance of the DSCL are calculated. The analytical results are found to be in very good agreement with the results of computer simulations.
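    An empirical DSCL can be estimated with a BFS from each node: a non-tree edge encountered during the search closes a cycle through the root of length dist[u] + dist[w] + 1. This sketch (for an Erdős-Rényi graph with assumed size and mean degree) ignores the rare corner case where the two tree paths share an intermediate vertex, so it is an upper-bound estimate per node rather than the paper's analytical machinery.

```python
import random
from collections import deque, Counter

random.seed(11)
N, c = 500, 3.0                  # nodes and mean degree (illustrative)
p = c / (N - 1)

adj = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:
            adj[i].append(j)
            adj[j].append(i)

def shortest_cycle_through(v):
    """BFS from v; an edge closing back between visited nodes gives a cycle."""
    dist, parent = {v: 0}, {v: -1}
    q, best = deque([v]), None
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w], parent[w] = dist[u] + 1, u
                q.append(w)
            elif w != parent[u]:
                L = dist[u] + dist[w] + 1
                best = L if best is None else min(best, L)
    return best

# DSCL over nodes that lie on at least one cycle
lengths = Counter(L for v in range(N)
                  if (L := shortest_cycle_through(v)) is not None)
print(sorted(lengths.items())[:5])
```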

  3. RAId_DbS: Peptide Identification using Database Searches with Realistic Statistics

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2007-01-01

    Background The key to mass-spectrometry-based proteomics is peptide identification. A major challenge in peptide identification is to obtain realistic E-values when assigning statistical significance to candidate peptides. Results Using a simple scoring scheme, we propose a database search method with theoretically characterized statistics. Taking into account possible skewness in the random variable distribution and the effect of finite sampling, we provide a theoretical derivation for the tail of the score distribution. For every experimental spectrum examined, we collect the scores of peptides in the database, and find good agreement between the collected score statistics and our theoretical distribution. Using Student's t-tests, we quantify the degree of agreement between the theoretical distribution and the score statistics collected. The t-tests may be used to measure the reliability of the reported statistics. When combined with the reported P-value for a peptide hit using a score distribution model, this new measure prevents exaggerated statistics. Another feature of RAId_DbS is its capability of detecting multiple co-eluted peptides. The peptide identification performance and statistical accuracy of RAId_DbS are assessed and compared with several other search tools. The executables and data related to RAId_DbS are freely available upon request. PMID:17961253

  4. Deep Learning Role in Early Diagnosis of Prostate Cancer

    PubMed Central

    Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman

    2018-01-01

    The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and extracted features from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs 3 major processing steps. First, prostate delineation using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, estimation and normalization of diffusion parameters, which are the apparent diffusion coefficients of the delineated prostate volumes at different b values followed by refinement of those apparent diffusion coefficients using a generalized Gaussian Markov random field model. Then, construction of the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen–based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient–cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), which indicate the promising results of the presented computer-aided diagnostic system. PMID:29804518

  5. A random Q-switched fiber laser

    PubMed Central

    Tang, Yulong; Xu, Jianqiu

    2015-01-01

    Extensive studies have been performed on random lasers in which multiple-scattering feedback is used to generate coherent emission. Q-switching and mode-locking are well-known routes for achieving high peak power output in conventional lasers. However, in random lasers, the ubiquitous random cavities that are formed by multiple scattering inhibit energy storage, making Q-switching impossible. In this paper, widespread Rayleigh scattering arising from the intrinsic micro-scale refractive-index irregularities of fiber cores is used to form random cavities along the fiber. The Q-factor of the cavity is rapidly increased by stimulated Brillouin scattering just after the spontaneous emission is enhanced by random cavity resonances, resulting in random Q-switched pulses with high brightness and high peak power. This report presents the first observation of high-brightness random Q-switched laser emission and is expected to stimulate new areas of scientific research and applications, including encryption, remote three-dimensional random imaging and the simulation of stellar lasing. PMID:25797520

  6. Paretian Poisson Processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations that we coin 'Paretian Poisson processes'. This class is elemental in statistical physics—connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.

  7. Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks.

    PubMed

    Zhang, Jiwei; Newhall, Katherine; Zhou, Douglas; Rangan, Aaditya

    2014-04-01

    Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand homogeneous dynamics characterized by weak correlations between neurons, and on the other hand total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous "multiple firing events" (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population-based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.

  8. Statistical self-similarity of width function maxima with implications to floods

    USGS Publications Warehouse

    Veitzer, S.A.; Gupta, V.K.

    2001-01-01

    Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.

  9. Pharmacokinetics and Tissue Distribution Study of Chlorogenic Acid from Lonicerae Japonicae Flos Following Oral Administrations in Rats

    PubMed Central

    Zhou, Yulu; Zhou, Ting; Pei, Qi; Liu, Shikun; Yuan, Hong

    2014-01-01

    Chlorogenic acid (ChA) is proposed as the major bioactive compound of Lonicerae Japonicae Flos (LJF). Forty-two Wistar rats were randomly divided into seven groups to investigate the pharmacokinetics and tissue distribution of ChA, via oral administration of LJF extract, using ibuprofen as an internal standard and employing high-performance liquid chromatography in conjunction with tandem mass spectrometry. Analytes were extracted from plasma samples and tissue homogenate by liquid–liquid extraction with acetonitrile, separated on a C18 column by linear gradient elution, and detected by electrospray ionization mass spectrometry in negative selected multiple reaction monitoring mode. Our results demonstrate that the method has satisfactory selectivity, linearity, extraction recovery, matrix effect, precision, accuracy, and stability. A noncompartmental pharmacokinetic analysis revealed that ChA was rapidly absorbed and eliminated. The tissue study indicated that the highest level was observed in liver, followed by kidney, lung, heart, and spleen. In conclusion, this method is suitable for studying the pharmacokinetics and tissue distribution of ChA after oral administration. PMID:25140190
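The noncompartmental analysis mentioned above reduces to summary metrics read directly off the concentration-time profile. A minimal sketch with hypothetical sampling times and concentrations (trapezoidal AUC; not the study's actual software):

```python
def nca_metrics(times, conc):
    """Cmax, Tmax, and trapezoidal AUC(0-t) from a concentration-time profile."""
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    auc = sum((t1 - t0) * (c0 + c1) / 2.0  # trapezoid for each interval
              for t0, t1, c0, c1 in zip(times, times[1:], conc, conc[1:]))
    return cmax, tmax, auc
```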

  10. Alignment between Protostellar Outflows and Filamentary Structure

    NASA Astrophysics Data System (ADS)

    Stephens, Ian W.; Dunham, Michael M.; Myers, Philip C.; Pokhrel, Riwaj; Sadavoy, Sarah I.; Vorobyov, Eduard I.; Tobin, John J.; Pineda, Jaime E.; Offner, Stella S. R.; Lee, Katherine I.; Kristensen, Lars E.; Jørgensen, Jes K.; Goodman, Alyssa A.; Bourke, Tyler L.; Arce, Héctor G.; Plunkett, Adele L.

    2017-09-01

    We present new Submillimeter Array (SMA) observations of CO(2-1) outflows toward young, embedded protostars in the Perseus molecular cloud as part of the Mass Assembly of Stellar Systems and their Evolution with the SMA (MASSES) survey. For 57 Perseus protostars, we characterize the orientation of the outflow angles and compare them with the orientation of the local filaments as derived from Herschel observations. We find that the relative angles between outflows and filaments are inconsistent with purely parallel or purely perpendicular distributions. Instead, the observed distribution of outflow-filament angles is more consistent with either randomly aligned angles or a mix of projected parallel and perpendicular angles. A mix of parallel and perpendicular angles requires perpendicular alignment to be more common by a factor of ~3. Our results show that the observed distributions probably hold regardless of the protostar's multiplicity, age, or the host core's opacity. These observations indicate that the angular momentum axis of a protostar may be independent of the large-scale structure. We discuss the significance of independent protostellar rotation axes in the general picture of filament-based star formation.

  11. Time-dependent Fracture Behaviour of Polyampholyte Hydrogels

    NASA Astrophysics Data System (ADS)

    Sun, Tao Lin; Luo, Feng; Nakajima, Tasuku; Kurokawa, Takayuki; Gong, Jian Ping

    Recently, we reported that polyampholytes, polymers bearing randomly dispersed cationic and anionic repeat groups, form tough and self-healing hydrogels with multiple excellent mechanical functions. The randomness yields ionic bonds with a wide distribution of strengths, via inter- and intra-chain complexation. As the breaking and reforming of ionic bonds are time dependent, the hydrogels exhibit rate-dependent mechanical behaviour. We systematically studied the tearing energy in tearing tests at various velocities and temperatures, and the linear viscoelastic behaviour over a wide range of frequency and temperature. The results show that the tearing energy markedly increases with the crack velocity and decreases with temperature. In accordance with the Williams, Landel, and Ferry (WLF) rate-temperature equivalence, a master curve of the tearing energy's dependence on crack velocity can be constructed using the same shift factor as for the linear viscoelastic data. The scaling relation of tearing energy as a function of crack velocity is predicted well by the rheological data according to linear fracture mechanics.

  12. Alignment of gold nanorods by angular photothermal depletion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Adam B.; Chow, Timothy T. Y.; Chon, James W. M., E-mail: jchon@swin.edu.au

    2014-02-24

    In this paper, we demonstrate that a high degree of alignment can be imposed upon randomly oriented gold nanorod films by angular photothermal depletion with linearly polarized laser irradiation. The photothermal reshaping of gold nanorods is observed to follow a quadratic melting model rather than the threshold melting model, which distorts the angular and spectral hole created on the 2D distribution map of nanorods into an open crater shape. We have incorporated these observations into the alignment procedure and demonstrate good agreement between experiment and simulation. The use of multiple laser depletion wavelengths allowed alignment over a large range of aspect ratios, placing 80% of the rods in the target angular range. We extend the technique to demonstrate post-alignment in a multilayer of randomly oriented gold nanorod films, with arbitrary control of alignment shown across the layers. Photothermal angular depletion alignment of gold nanorods is a simple, promising post-alignment method for creating future 3D or multilayer plasmonic nanorod-based devices and structures.

  13. Taxonomy for colorectal cancer screening promotion: Lessons from recent randomized controlled trials.

    PubMed

    Ritvo, Paul; Myers, Ronald E; Serenity, Mardie; Gupta, Samir; Inadomi, John M; Green, Beverly B; Jerant, Anthony; Tinmouth, Jill; Paszat, Lawrence; Pirbaglou, Meysam; Rabeneck, Linda

    2017-08-01

    To derive a taxonomy for colorectal cancer screening (CRCS) that advances Randomized Controlled Trials (RCTs) and screening uptake. Detailed publication review, multiple interviews with principal investigators (PIs), and collaboration with PIs as co-authors produced a CRCS intervention taxonomy. Semi-structured interview questions with PIs (Drs. Inadomi, Myers, Green, Gupta, Jerant and Ritvo) yielded details about trial conduct. Interview comparisons led to an iterative process informing serial interviews until a consensus was obtained on the final taxonomy structure. These taxonomy headings (Engagement Sponsor, Population Targeted, Alternative Screening Tests, Delivery Methods, and Support for Test Performance (EPADS)) were used to compare studies. Exemplary insights emphasized: 1) direct test delivery to patients; 2) linguistic-ethnic matching of staff to minority subjects; and 3) authorization of navigators to schedule or refer for colonoscopies and/or distribute stool blood tests during screening promotion. PIs of key RCTs (2012-2015) derived a CRCS taxonomy useful in the detailed examination of CRCS promotion and the design of future RCTs. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. REVIEWS OF TOPICAL PROBLEMS: Transition radiation in media with random inhomogeneities

    NASA Astrophysics Data System (ADS)

    Platonov, Konstantin Yu; Fleishman, G. D.

    2002-03-01

    This review analyzes radiation produced by randomly inhomogeneous media excited by fast particles — i.e., polarization bremsstrahlung for inhomogeneities in thermodynamic equilibrium or transition radiation for nonthermal ones — taking into account all the effects important for natural sources. Magnetic field effects on both the motion of fast particles and the dispersion of background plasma are considered, and the multiple scattering of fast particles in the medium is examined. Various resonant effects occurring under the conditions of Cherenkov (or cyclotron) emission for a particular eigenmode are discussed. The transition radiation intensity and absorption (amplification) coefficients are calculated for ensembles of fast particles with realistic distributions over momentum and angles. The value of the developed theory of transition radiation is illustrated by applying it to astrophysical objects. Transition radiation is shown to contribute significantly to the radio emission of the Sun, planets (including Earth), and interplanetary and interstellar media. Possible further applications of transition radiation (particularly stimulated) are discussed.

  15. University Students' Conceptual Knowledge of Randomness and Probability in the Contexts of Evolution and Mathematics.

    PubMed

    Fiedler, Daniela; Tröbst, Steffen; Harms, Ute

    2017-01-01

    Students of all ages face severe conceptual difficulties regarding key aspects of evolution: the central, unifying, and overarching theme in biology. Aspects strongly related to abstract "threshold" concepts like randomness and probability appear to pose particular difficulties. A further problem is the lack of an appropriate instrument for assessing students' conceptual knowledge of randomness and probability in the context of evolution. To address this problem, we have developed two instruments, the Randomness and Probability Test in the Context of Evolution (RaProEvo) and the Randomness and Probability Test in the Context of Mathematics (RaProMath), that include both multiple-choice and free-response items. The instruments were administered to 140 university students in Germany; the Rasch partial-credit model was then applied to assess them. The results indicate that the instruments generate reliable and valid inferences about students' conceptual knowledge of randomness and probability in the two contexts (which are separable competencies). Furthermore, RaProEvo detected significant differences in knowledge of randomness and probability, as well as evolutionary theory, between biology majors and preservice biology teachers. © 2017 D. Fiedler et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  16. Hybrid computer technique yields random signal probability distributions

    NASA Technical Reports Server (NTRS)

    Cameron, W. D.

    1965-01-01

    Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
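A purely digital version of that analysis estimates the two distributions directly from samples: a normalized histogram of instantaneous amplitudes, plus the amplitudes of local maxima for the peak distribution. The helper names below are ours and the signal is synthetic:

```python
import random

def amplitude_histogram(signal, nbins=10):
    """Normalized histogram (relative frequencies) of instantaneous amplitudes."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / nbins or 1.0  # guard against a constant signal
    counts = [0] * nbins
    for x in signal:
        counts[min(int((x - lo) / width), nbins - 1)] += 1
    return [c / len(signal) for c in counts]

def local_peaks(signal):
    """Amplitudes of local maxima, for the peak-amplitude distribution."""
    return [signal[i] for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] >= signal[i + 1]]
```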

  17. Solar wind driving and substorm triggering

    NASA Astrophysics Data System (ADS)

    Newell, Patrick T.; Liou, Kan

    2011-03-01

    We compare solar wind driving and its changes for three data sets: (1) 4861 identifications of substorm onsets from satellite global imagers (Polar UVI and IMAGE FUV); (2) a similar number of otherwise random times chosen with a similar solar wind distribution (slightly elevated driving); (3) completely random times. Multiple measures of solar wind driving were used, including interplanetary magnetic field (IMF) Bz, the Kan-Lee electric field, the Borovsky function, and dΦMP/dt (all of which estimate dayside merging). Superposed epoch analysis verifies that the mean Bz has a northward turning (or at least averages less southward) starting 20 min before onset. We argue that the delay between IMF impact on the magnetopause and tail effects appearing in the ionosphere is about that long. The northward turning is not the effect of a few extreme events. The median field shows the same result, as do all other measures of solar wind driving. We compare the rate of northward turning to that observed after random times with slightly elevated driving. The subsequent reversion to mean is essentially the same between random elevations and substorms. To further verify this, we consider in detail the distribution of changes from the statistical peak (20 min prior to onset) to onset. For Bz, the mean change after onset is +0.14 nT (i.e., IMF becomes more northward), but the standard deviation is σ = 2.8 nT. Thus large changes in either direction are common. For EKL, the change is -15 nT km/s ± 830 nT km/s. Thus either a hypothesis predicting northward turnings or one predicting southward turnings would find abundant yet random confirming examples. Indeed, applying the Lyons et al. (1997) trigger criteria (excluding only the prior requirement of 22/30 min Bz < 0, which is often not valid for actual substorms) to these three sets of data shows that "northward turning triggers" occur in 23% of the random data, 24% of the actual substorms, and after 27% of the random elevations. 
These results strongly support the idea of Morley and Freeman (2007), that substorms require initial elevated solar wind driving, but that there is no evidence for external triggering. Finally dynamic pressure, p, and velocity, v, show no meaningful variation around onset (although p averages 10% above an 11 year mean).
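The superposed epoch analysis used here has a simple core: extract a fixed window around each onset and average pointwise across events. A generic sketch (our variable names, not the authors' code):

```python
def superposed_epoch(series, onsets, before, after):
    """Mean profile of `series` over windows [onset - before, onset + after),
    averaged across all onsets whose window fits inside the series."""
    windows = [series[i - before:i + after]
               for i in onsets if i >= before and i + after <= len(series)]
    return [sum(w[k] for w in windows) / len(windows)
            for k in range(before + after)]
```

Applied to IMF Bz with windows centered on substorm onsets, the averaged profile would reveal features such as the northward turning starting about 20 min before onset.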

  18. Predicting the distribution and ecological niche of unexploited snow crab (Chionoecetes opilio) populations in Alaskan waters: a first open-access ensemble model.

    PubMed

    Hardy, Sarah M; Lindgren, Michael; Konakanchi, Hanumantharao; Huettmann, Falk

    2011-10-01

    Populations of the snow crab (Chionoecetes opilio) are widely distributed on high-latitude continental shelves of the North Pacific and North Atlantic, and represent a valuable resource in both the United States and Canada. In US waters, snow crabs are found throughout the Arctic and sub-Arctic seas surrounding Alaska, north of the Aleutian Islands, yet commercial harvest currently focuses on the more southerly population in the Bering Sea. Population dynamics are well-monitored in exploited areas, but few data exist for populations further north, where climate trends in the Arctic appear to be affecting species' distributions and community structure on multiple trophic levels. Moreover, increased shipping traffic, as well as fisheries and petroleum resource development, may add additional pressures in northern portions of the range as seasonal ice cover continues to decline. In the face of these pressures, we examined the ecological niche and population distribution of snow crabs in Alaskan waters using a GIS-based spatial modeling approach. We present the first quantitative open-access model predictions of snow-crab distribution, abundance, and biomass in the Chukchi and Beaufort Seas. Multi-variate analysis of environmental drivers of species' distribution and community structure commonly relies on multiple linear regression methods. The spatial modeling approach employed here improves upon linear regression methods in allowing for exploration of nonlinear relationships and interactions between variables. Three machine-learning algorithms were used to evaluate relationships between snow-crab distribution and environmental parameters: TreeNet, Random Forests, and MARS. An ensemble model was then generated by combining output from these three models to generate consensus predictions for presence-absence, abundance, and biomass of snow crabs. 
Each algorithm identified a suite of variables most important in predicting snow-crab distribution, including nutrient and chlorophyll-a concentrations in overlying waters, temperature, salinity, and annual sea-ice cover; this information may be used to develop and test hypotheses regarding the ecology of this species. This is the first such quantitative model for snow crabs, and all GIS-data layers compiled for this project are freely available from the authors, upon request, for public use and improvement.
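Consensus predictions of the kind described can be formed by averaging each model's per-site output probability and thresholding the mean. The sketch below is generic; the three probability vectors stand in for hypothetical TreeNet, Random Forests, and MARS outputs:

```python
def ensemble_consensus(model_probs, threshold=0.5):
    """Average per-site presence probabilities across models, then threshold
    the mean to obtain a consensus presence/absence call."""
    n_sites = len(model_probs[0])
    mean = [sum(p[i] for p in model_probs) / len(model_probs)
            for i in range(n_sites)]
    return mean, [m >= threshold for m in mean]
```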

  19. Neuronal Assemblies Evidence Distributed Interactions within a Tactile Discrimination Task in Rats

    PubMed Central

    Deolindo, Camila S.; Kunicki, Ana C. B.; da Silva, Maria I.; Lima Brasil, Fabrício; Moioli, Renan C.

    2018-01-01

    Accumulating evidence suggests that neural interactions are distributed and relate to animal behavior, but many open questions remain. The neural assembly hypothesis, formulated by Hebb, states that synchronously active single neurons may transiently organize into functional neural circuits—neuronal assemblies (NAs)—and that these assemblies constitute the fundamental unit of information processing in the brain. However, the formation, vanishing, and temporal evolution of NAs are not fully understood. In particular, characterizing NAs in multiple brain regions over the course of behavioral tasks is relevant to assess the highly distributed nature of brain processing. In the context of NA characterization, active tactile discrimination tasks with rats are elucidative because they engage several cortical areas in the processing of information that is otherwise masked in passive or anesthetized scenarios. In this work, we investigate the dynamic formation of NAs within and among four different cortical regions in long-range fronto-parieto-occipital networks (primary somatosensory, primary visual, prefrontal, and posterior parietal cortices), simultaneously recorded from seven rats engaged in an active tactile discrimination task. Our results first confirm that task-related neuronal firing rate dynamics in all four regions is significantly modulated. Notably, a support vector machine decoder reveals that neural populations contain more information about the tactile stimulus than the majority of single neurons alone. Then, over the course of the task, we identify the emergence and vanishing of NAs whose participating neurons are shown to contain more information about animal behavior than randomly chosen neurons. Taken together, our results further support the role of multiple and distributed neurons as the functional unit of information processing in the brain (NA hypothesis) and their link to active animal behavior. PMID:29375324

  20. Notes on SAW Tag Interrogation Techniques

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2010-01-01

    We consider the problem of interrogating a single SAW RFID tag with a known ID and known range in the presence of multiple interfering tags under the following assumptions: (1) The RF propagation environment is well approximated as a simple delay channel with geometric power-decay constant α ≥ 2. (2) The interfering tag IDs are unknown but well approximated as independent, identically distributed random samples from a probability distribution of tag ID waveforms with known second-order properties, and the tag of interest is drawn independently from the same distribution. (3) The ranges of the interfering tags are unknown but well approximated as independent, identically distributed realizations of a random variable ρ with a known probability distribution f_ρ, and the tag ranges are independent of the tag ID waveforms. In particular, we model the tag waveforms as random impulse responses from a wide-sense-stationary, uncorrelated-scattering (WSSUS) fading channel with known bandwidth and scattering function. A brief discussion of the properties of such channels and the notation used to describe them in this document is given in the Appendix. Under these assumptions, we derive the expression for the output signal-to-noise ratio (SNR) for an arbitrary combination of transmitted interrogation signal and linear receiver filter. Based on this expression, we derive the optimal interrogator configuration (i.e., transmitted signal/receiver filter combination) in the two extreme noise/interference regimes, i.e., noise-limited and interference-limited, under the additional assumption that the coherence bandwidth of the tags is much smaller than the total tag bandwidth. Finally, we evaluate the performance of both optimal interrogators over a broad range of operating scenarios using both numerical simulation based on the assumed model and Monte Carlo simulation based on a small sample of measured tag waveforms. 
The performance evaluation results not only provide guidelines for proper interrogator design, but also provide some insight on the validity of the assumed signal model. It should be noted that the assumption that the impulse response of the tag of interest is known precisely implies that the temperature and range of the tag are also known precisely, which is generally not the case in practice. However, analyzing interrogator performance under this simplifying assumption is much more straightforward and still provides a great deal of insight into the nature of the problem.

  1. Distributed data fusion across multiple hard and soft mobile sensor platforms

    NASA Astrophysics Data System (ADS)

    Sinsley, Gregory

    One of the biggest challenges currently facing the robotics field is sensor data fusion. Unmanned robots carry many sophisticated sensors including visual and infrared cameras, radar, laser range finders, chemical sensors, accelerometers, gyros, and global positioning systems. By effectively fusing the data from these sensors, a robot would be able to form a coherent view of its world that could then be used to facilitate both autonomous and intelligent operation. Another distinct fusion problem is that of fusing data from teammates with data from onboard sensors. If an entire team of vehicles has the same worldview they will be able to cooperate much more effectively. Sharing worldviews is made even more difficult if the teammates have different sensor types. The final fusion challenge the robotics field faces is that of fusing data gathered by robots with data gathered by human teammates (soft sensors). Humans sense the world completely differently from robots, which makes this problem particularly difficult. The advantage of fusing data from humans is that it makes more information available to the entire team, thus helping each agent to make the best possible decisions. This thesis presents a system for fusing data from multiple unmanned aerial vehicles, unmanned ground vehicles, and human observers. The first issue this thesis addresses is that of centralized data fusion. This is a foundational data fusion issue, which has been very well studied. Important issues in centralized fusion include data association, classification, tracking, and robotics problems. Because these problems are so well studied, this thesis does not make any major contributions in this area, but does review it for completeness. The chapter on centralized fusion concludes with an example unmanned aerial vehicle surveillance problem that demonstrates many of the traditional fusion methods. The second problem this thesis addresses is that of distributed data fusion. 
Distributed data fusion is a younger field than centralized fusion. The main issues in distributed fusion that are addressed are distributed classification and distributed tracking. There are several well established methods for performing distributed fusion that are first reviewed. The chapter on distributed fusion concludes with a multiple unmanned vehicle collaborative test involving an unmanned aerial vehicle and an unmanned ground vehicle. The third issue this thesis addresses is that of soft sensor only data fusion. Soft-only fusion is a newer field than centralized or distributed hard sensor fusion. Because of the novelty of the field, the chapter on soft only fusion contains less background information and instead focuses on some new results in soft sensor data fusion. Specifically, it discusses a novel fuzzy logic based soft sensor data fusion method. This new method is tested using both simulations and field measurements. The biggest issue addressed in this thesis is that of combined hard and soft fusion. Fusion of hard and soft data is the newest area for research in the data fusion community; therefore, some of the largest theoretical contributions in this thesis are in the chapter on combined hard and soft fusion. This chapter presents a novel combined hard and soft data fusion method based on random set theory, which processes random set data using a particle filter. Furthermore, the particle filter is designed to be distributed across multiple robots and portable computers (used by human observers) so that there is no centralized failure point in the system. After laying out a theoretical groundwork for hard and soft sensor data fusion the thesis presents practical applications for hard and soft sensor data fusion in simulation. Through a series of three progressively more difficult simulations, some important hard and soft sensor data fusion capabilities are demonstrated. 
The first simulation demonstrates fusing data from a single soft sensor and a single hard sensor in order to track a car that could be driving normally or erratically. The second simulation adds the extra complication of classifying the type of target to the simulation. The third simulation uses multiple hard and soft sensors, with a limited field of view, to track a moving target and classify it as a friend, foe, or neutral. The final chapter builds on the work done in previous chapters by performing a field test of the algorithms for hard and soft sensor data fusion. The test utilizes an unmanned aerial vehicle, an unmanned ground vehicle, and a human observer with a laptop. The test is designed to mimic a collaborative human and robot search and rescue problem. This test makes some of the most important practical contributions of the thesis by showing that the algorithms that have been developed for hard and soft sensor data fusion are capable of running in real time on relatively simple hardware.
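The hard/soft fusion scheme described (a particle filter whose weights combine likelihoods from heterogeneous sensors) can be sketched as a single bootstrap-filter step. This is a generic, centralized illustration, not the thesis's distributed random-set implementation; the sensor models in the usage below are hypothetical.

```python
import math
import random

def particle_filter_step(particles, move, likelihoods):
    """One bootstrap-filter step: propagate, weight by the product of all
    sensor likelihoods (e.g. one hard sensor and one soft human report),
    then systematically resample."""
    particles = [move(p) for p in particles]
    weights = [math.prod(fn(p) for fn in likelihoods) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # systematic resampling
    n = len(particles)
    u = random.uniform(0.0, 1.0 / n)
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum and i < n - 1:
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += 1.0 / n
    return out
```

Multiplying the likelihoods treats the hard sensor reading and the soft human report as conditionally independent evidence about the same state.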

  2. Multiple scattering and the density distribution of a Cs MOT.

    PubMed

    Overstreet, K; Zabawa, P; Tallant, J; Schwettmann, A; Shaffer, J

    2005-11-28

    Multiple scattering is studied in a Cs magneto-optical trap (MOT). We use two Abel inversion algorithms to recover density distributions of the MOT from fluorescence images. Deviations of the density distribution from a Gaussian are attributed to multiple scattering.
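Abel inversion of such fluorescence images has a simple discrete form known as onion peeling: assume the density is constant on concentric annuli and back-substitute from the outermost ring inward. The sketch below is a generic version of this idea, not the two specific algorithms used by the authors:

```python
import math

def onion_peel(projection, dr=1.0):
    """Recover radial density f(r) from a line-of-sight projection P(y) of an
    axisymmetric distribution, assuming f is constant on annuli of width dr.
    projection[i] is P at y = i * dr; returns f at r = i * dr."""
    n = len(projection)
    f = [0.0] * n
    for i in range(n - 1, -1, -1):  # peel from the outermost annulus inward
        s = projection[i]
        for j in range(i + 1, n):
            # chord length of the sight line at y_i through annulus j
            s -= f[j] * 2.0 * (math.sqrt(((j + 1) * dr) ** 2 - (i * dr) ** 2)
                               - math.sqrt((j * dr) ** 2 - (i * dr) ** 2))
        f[i] = s / (2.0 * math.sqrt(((i + 1) * dr) ** 2 - (i * dr) ** 2))
    return f
```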

  3. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.

  4. Multiple-trait structured antedependence model to study the relationship between litter size and birth weight in pigs and rabbits.

    PubMed

    David, Ingrid; Garreau, Hervé; Balmisse, Elodie; Billon, Yvon; Canario, Laurianne

    2017-01-20

    Some genetic studies need to take into account correlations between traits that are repeatedly measured over time. Multiple-trait random regression models are commonly used to analyze repeated traits but suffer from several major drawbacks. In the present study, we developed a multiple-trait extension of the structured antedependence model (SAD) to overcome this issue and validated its usefulness by modeling the association between litter size (LS) and average birth weight (ABW) over parities in pigs and rabbits. The single-trait SAD model assumes that a random effect at time t can be explained by the previous values of the random effect (i.e. at previous times). The proposed multiple-trait extension of the SAD model consists of adding a cross-antedependence parameter to the single-trait SAD model. This model can be easily fitted using ASReml and the OWN Fortran program that we have developed. In comparison with the random regression model, we used our multiple-trait SAD model to analyze the LS and ABW of 4345 litters from 1817 Large White sows and 8706 litters from 2286 L-1777 does over a maximum of five successive parities. For both species, the multiple-trait SAD fitted the data better than the random regression model. The differences between the AIC values of the two models (AIC of the random regression model minus AIC of the SAD model) were 7 and 227 for pigs and rabbits, respectively. A similar pattern of heritability and correlation estimates was obtained for both species. Heritabilities were lower for LS (ranging from 0.09 to 0.29) than for ABW (ranging from 0.23 to 0.39). The general trend was a decrease of the genetic correlation for a given trait between more distant parities. Estimates of genetic correlations between LS and ABW were negative and ranged from -0.03 to -0.52 across parities.
No correlation was observed between the permanent environmental effects, except between the permanent environmental effects of LS and ABW of the same parity, for which the estimate of the correlation was strongly negative (ranging from -0.57 to -0.67). We demonstrated that application of our multiple-trait SAD model is feasible for studying several traits with repeated measurements and showed that it provided a better fit to the data than the random regression model.
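
    The first-order antedependence idea above (a random effect at parity t expressed through the effect at parity t-1) can be sketched with a toy simulation; the parameter values below (autoregression coefficient, innovation spread, herd size) are illustrative assumptions, not estimates from the paper.

```python
import random

def simulate_antedependence(n_animals=2000, n_parities=5, phi=0.6, sd=1.0, seed=1):
    """First-order antedependence: u_t = phi * u_(t-1) + e_t, e_t ~ N(0, sd^2).
    Returns one trajectory of the random effect per animal."""
    rng = random.Random(seed)
    trajectories = []
    for _ in range(n_animals):
        u = [rng.gauss(0.0, sd)]
        for _ in range(1, n_parities):
            u.append(phi * u[-1] + rng.gauss(0.0, sd))
        trajectories.append(u)
    return trajectories

def corr(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

traj = simulate_antedependence()
# Correlations decay with the distance between parities, which is the
# qualitative trend reported for the genetic correlations.
r_adjacent = corr([t[1] for t in traj], [t[2] for t in traj])
r_distant = corr([t[0] for t in traj], [t[4] for t in traj])
```

    Under this sketch the correlation between adjacent parities is markedly higher than between parities four apart, the pattern the SAD model is designed to capture with few parameters.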

  5. Cranberry versus placebo in the prevention of urinary infections in multiple sclerosis: a multicenter, randomized, placebo-controlled, double-blind trial.

    PubMed

    Gallien, Philippe; Amarenco, Gérard; Benoit, Nicolas; Bonniaud, Véronique; Donzé, Cécile; Kerdraon, Jacques; de Seze, Marianne; Denys, Pierre; Renault, Alain; Naudet, Florian; Reymann, Jean Michel

    2014-08-01

    Our aim was to assess the usefulness of cranberry extract in multiple sclerosis (MS) patients suffering from urinary disorders. In total, 171 adult MS outpatients with urinary disorders presenting at eight centers were randomized (stratification according to center and use of clean intermittent self-catheterization) to cranberry versus placebo in a 1-year, prospective, double-blind study that was analyzed using a sequential method on an intent-to-treat basis. An independent monitoring board analyzed the results of the analyses each time 40 patients were assessed on the main endpoint. Cranberry extract (36 mg proanthocyanidins per day) or a matching placebo was taken by participants twice daily for 1 year. The primary endpoint was the time to first symptomatic urinary tract infection (UTI), subject to validation by a validation committee. The second sequential analysis allowed us to accept the null hypothesis (no difference between cranberry and placebo). There was no difference in time to first symptomatic UTI distribution across 1 year, with an estimated hazard ratio of 0.99, 95% CI [0.61, 1.60] (p = 0.97). Secondary endpoints and tolerance did not differ between groups. Taking cranberry extract versus placebo twice a day did not prevent UTI occurrence in MS patients with urinary disorders. Trial Registration NCT00280592. © The Author(s) 2014.

  6. Random walks with random velocities.

    PubMed

    Zaburdaev, Vasily; Schmiedeberg, Michael; Stark, Holger

    2008-07-01

    We consider a random walk model that takes into account the velocity distribution of random walkers. Random motion with alternating velocities is inherent to various physical and biological systems. Moreover, the velocity distribution is often the first characteristic that is experimentally accessible. Here, we derive transport equations describing the dispersal process in the model and solve them analytically. The asymptotic properties of solutions are presented in the form of a phase diagram that shows all possible scaling regimes, including superdiffusive, ballistic, and superballistic motion. The theoretical results of this work are in excellent agreement with accompanying numerical simulations.
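
    A minimal numerical illustration of the model class (not the authors' transport equations): walkers take unit-duration flights with a velocity drawn fresh for each flight. For a finite-variance velocity distribution, as assumed here, the mean squared displacement grows linearly in time (ordinary diffusion); heavy-tailed velocity distributions would produce the superdiffusive, ballistic, and superballistic regimes mapped in the paper's phase diagram.

```python
import random

def msd(n_walkers, n_flights, seed=0):
    """Mean squared displacement of 1D walkers that draw a fresh
    Gaussian velocity for every unit-duration flight."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_flights):
            x += rng.gauss(0.0, 1.0)  # displacement = velocity * unit flight time
        total += x * x
    return total / n_walkers

# Finite-variance velocities give ordinary diffusion: MSD grows linearly,
# so doubling the number of flights roughly doubles the MSD.
m10 = msd(20000, 10)
m20 = msd(20000, 20, seed=1)
```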

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wulff, J; Huggins, A

    Purpose: The shape of a single beam in proton PBS influences the resulting dose distribution. Spot profiles are modelled as two-dimensional Gaussian (single/double) distributions in treatment planning systems (TPS). The impact of slight deviations from an ideal Gaussian on resulting dose distributions is typically assumed to be small due to alleviation by multiple Coulomb scattering (MCS) in tissue and superposition of many spots. Quantitative limits are, however, not clear per se. Methods: A set of 1250 deliberately deformed profiles with sigma=4 mm for a Gaussian fit were constructed. Profiles and fit were normalized to the same area, resembling output calibration in the TPS. Depth-dependent MCS was considered. The deviation between deformed and ideal profiles was characterized by root-mean-squared deviation (RMSD), skewness/kurtosis (SK) and full width at different percentages of maximum (FWxM). The profiles were convolved with different fluence patterns (regular/random) resulting in hypothetical dose distributions. The resulting deviations were analyzed by applying a gamma-test. Results were compared to measured spot profiles. Results: A clear correlation between pass-rate and profile metrics could be determined. The largest impact occurred for a regular fluence-pattern with increasing distance between single spots, followed by a random distribution of spot weights. The results are strongly dependent on gamma-analysis dose and distance levels. Pass-rates of >95% at 2%/2 mm and 40 mm depth (=70 MeV) could only be achieved for RMSD<10%, deviation in FWxM at 20%, and root of quadratic sum of SK <0.8. As expected, the results improve for larger depths. The trends were well reproduced for measured spot profiles. Conclusion: All measured profiles from ProBeam sites passed the criteria. Given that beam-line tuning can result in shape distortions, the derived criteria represent a useful QA tool for commissioning and design of future beam-line optics.
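
    The profile metrics named in the abstract (RMSD relative to the ideal profile, full width at x% of maximum) can be illustrated on a one-dimensional toy profile. The deformation below and all numbers are assumptions for illustration, not the study's 1250 deformed profiles, and the 2D fitting and MCS convolution are omitted.

```python
import math

SIGMA = 4.0  # mm, matching the abstract's sigma = 4 mm Gaussian fit

def gaussian(x):
    return math.exp(-x * x / (2.0 * SIGMA * SIGMA))

def deformed(x, eps=0.05):
    # hypothetical mild asymmetric deformation of the ideal profile
    return gaussian(x) * (1.0 + eps * x / SIGMA)

step = 0.1
xs = [i * step for i in range(-200, 201)]
ideal = [gaussian(x) for x in xs]
spot = [deformed(x) for x in xs]

# normalize both profiles to the same area, resembling output calibration
area_i = sum(ideal) * step
area_s = sum(spot) * step
ideal = [v / area_i for v in ideal]
spot = [v / area_s for v in spot]

# root-mean-squared deviation between deformed and ideal profile,
# expressed relative to the ideal peak value
peak = max(ideal)
rmsd = math.sqrt(sum((a - b) ** 2 for a, b in zip(spot, ideal)) / len(xs)) / peak

def full_width(profile, frac):
    """Full width at a given fraction of the profile maximum (FWxM)."""
    thr = frac * max(profile)
    above = [x for x, v in zip(xs, profile) if v >= thr]
    return above[-1] - above[0]

fw20_ideal = full_width(ideal, 0.2)
fw20_spot = full_width(spot, 0.2)
```

    For this mild deformation the relative RMSD stays at the percent level and the FW at 20% of maximum barely moves, consistent with the abstract's point that small shape deviations need quantitative limits rather than visual inspection.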

  8. Modeling species-abundance relationships in multi-species collections

    USGS Publications Warehouse

    Peng, S.; Yin, Z.; Ren, H.; Guo, Q.

    2003-01-01

    Species-abundance relationship is one of the most fundamental aspects of community ecology. Since Motomura first developed the geometric series model to describe the feature of community structure, ecologists have developed many other models to fit the species-abundance data in communities. These models can be classified into empirical and theoretical ones, including (1) statistical models, i.e., negative binomial distribution (and its extension), log-series distribution (and its extension), geometric distribution, lognormal distribution, Poisson-lognormal distribution, (2) niche models, i.e., geometric series, broken stick, overlapping niche, particulate niche, random assortment, dominance pre-emption, dominance decay, random fraction, weighted random fraction, composite niche, Zipf or Zipf-Mandelbrot model, and (3) dynamic models describing community dynamics and restrictive function of environment on community. These models have different characteristics and fit species-abundance data in various communities or collections. Among them, log-series distribution, lognormal distribution, geometric series, and broken stick model have been most widely used.
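
    Two of the niche models listed above have simple closed forms that can be computed directly. As a sketch (total abundance, species count, and the geometric-series parameter k are illustrative assumptions): in the geometric series the i-th ranked species preempts a fraction k of the remaining resource, while in the broken stick model the expected abundance of the i-th ranked species is (N/S) * sum of 1/j for j = i..S.

```python
def geometric_series(total, n_species, k):
    """Motomura's geometric series: the i-th ranked species preempts a
    fraction k of the remaining resource (normalized to the total)."""
    c = 1.0 - (1.0 - k) ** n_species  # normalizing constant
    return [total * k * (1.0 - k) ** i / c for i in range(n_species)]

def broken_stick(total, n_species):
    """MacArthur's broken stick: expected abundance of the i-th ranked species."""
    return [total / n_species * sum(1.0 / j for j in range(i, n_species + 1))
            for i in range(1, n_species + 1)]

geo = geometric_series(1000, 10, 0.4)
stick = broken_stick(1000, 10)
```

    Both rank-abundance curves decrease monotonically and sum to the total abundance; the geometric series falls off much more steeply than the broken stick, which is why the two models fit communities of very different evenness.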

  9. Does prism width from the shell prismatic layer have a random distribution?

    NASA Astrophysics Data System (ADS)

    Vancolen, Séverine; Verrecchia, Eric

    2008-10-01

    A study of the distribution of the prism width inside the prismatic layer of Unio tumidus (Philipsson 1788, Diss Hist-Nat, Berling, Lundæ) from Lake Neuchâtel, Switzerland, has been conducted in order to determine whether or not this distribution is random. Measurements of 954 to 1,343 prism widths (depending on shell sample) have been made using a scanning electron microscope in backscattered electron mode. A white noise test has been applied to the distribution of prism sizes (i.e. width). It shows that there is no temporal cycle that could potentially influence their formation and growth. These results suggest that prism widths are randomly distributed, and related neither to external rings nor to environmental constraints.
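
    The white noise test applied to the prism-width series is not detailed here, but the underlying idea (no serial structure in successive widths) can be sketched with a lag-1 autocorrelation check; the data below are a synthetic stand-in for the measured widths, and the band ±1.96/√n is the usual large-sample 95% threshold for white noise.

```python
import random

def lag1_autocorr(series):
    """Sample autocorrelation at lag 1."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

rng = random.Random(42)
# stand-in for ~1,000 successive prism widths with no temporal structure
widths = [rng.gauss(30.0, 5.0) for _ in range(1000)]

r1 = lag1_autocorr(widths)
bound = 1.96 / 1000 ** 0.5  # large-sample 95% band for white noise
consistent_with_white_noise = abs(r1) < bound
```

    A lag-1 coefficient inside the band is what one expects for randomly distributed widths; a growth cycle influencing prism formation would push the coefficient well outside it.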

  10. Super-resolving random-Gaussian apodized photon sieve.

    PubMed

    Sabatyan, Arash; Roshaninejad, Parisa

    2012-09-10

    A novel apodized photon sieve is presented in which a random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The random distribution in the dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes in order to form a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities in the focusing properties of the photon sieve is examined as well. Analysis shows that secondary maxima have evidently been suppressed, transmission has increased enormously, and the central maximum width is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.

  11. Spatial Distribution of Phase Singularities in Optical Random Vector Waves.

    PubMed

    De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L

    2016-08-26

    Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.

  12. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, David O.

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.

  13. Two new methods to fit models for network meta-analysis with random inconsistency effects.

    PubMed

    Law, Martin; Jackson, Dan; Turner, Rebecca; Rhodes, Kirsty; Viechtbauer, Wolfgang

    2016-07-28

    Meta-analysis is a valuable tool for combining evidence from multiple studies. Network meta-analysis is becoming more widely used as a means to compare multiple treatments in the same analysis. However, a network meta-analysis may exhibit inconsistency, whereby the treatment effect estimates do not agree across all trial designs, even after taking between-study heterogeneity into account. We propose two new estimation methods for network meta-analysis models with random inconsistency effects. The model we consider is an extension of the conventional random-effects model for meta-analysis to the network meta-analysis setting and allows for potential inconsistency using random inconsistency effects. Our first new estimation method uses a Bayesian framework with empirically-based prior distributions for both the heterogeneity and the inconsistency variances. We fit the model using importance sampling and thereby avoid some of the difficulties that might be associated with using Markov Chain Monte Carlo (MCMC). However, we confirm the accuracy of our importance sampling method by comparing the results to those obtained using MCMC as the gold standard. The second new estimation method we describe uses a likelihood-based approach, implemented in the metafor package, which can be used to obtain (restricted) maximum-likelihood estimates of the model parameters and profile likelihood confidence intervals of the variance components. We illustrate the application of the methods using two contrasting examples. The first uses all-cause mortality as an outcome, and shows little evidence of between-study heterogeneity or inconsistency. The second uses "ear discharge" as an outcome, and exhibits substantial between-study heterogeneity and inconsistency. Both new estimation methods give results similar to those obtained using MCMC. The extent of heterogeneity and inconsistency should be assessed and reported in any network meta-analysis. 
Our two new methods can be used to fit models for network meta-analysis with random inconsistency effects. They are easily implemented using the accompanying R code in the Additional file 1. Using these estimation methods, the extent of inconsistency can be assessed and reported.
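
    Neither of the paper's two estimation methods is reproduced here, but the conventional random-effects model they extend can be sketched with the standard DerSimonian-Laird moment estimator of the between-study variance; the effect sizes and within-study variances below are assumed toy values, not data from either example.

```python
def dersimonian_laird(effects, variances):
    """Method-of-moments estimate of the between-study variance tau^2
    and the corresponding random-effects pooled estimate."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    # re-weight each study by 1 / (within-study variance + tau^2)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return tau2, pooled

# illustrative effect estimates from five hypothetical trials
tau2, pooled = dersimonian_laird(
    effects=[0.8, 0.1, -0.4, 0.9, 0.3],
    variances=[0.04, 0.06, 0.05, 0.09, 0.07],
)
```

    With these toy inputs Q exceeds its degrees of freedom, so tau^2 is positive and the pooled estimate down-weights the precise studies relative to a fixed-effect analysis; the network models in the paper add a second, analogous variance component for inconsistency between designs.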

  14. Random distributed feedback fiber laser at 2.1  μm.

    PubMed

    Jin, Xiaoxi; Lou, Zhaokai; Zhang, Hanwei; Xu, Jiangming; Zhou, Pu; Liu, Zejin

    2016-11-01

    We demonstrate a random distributed feedback fiber laser at 2.1 μm. A high-power pulsed Tm-doped fiber laser operating at 1.94 μm with a temporal duty ratio of 30% was employed as a pump laser to increase the equivalent incident pump power. A piece of 150 m highly GeO2-doped silica fiber that provides strong Raman gain and random distributed feedback was used as the gain medium. The maximum output power reached 0.5 W with an optical efficiency of 9%, which could be further improved with more pump power and an optimized fiber length. To the best of our knowledge, this is the first demonstration of a random distributed feedback fiber laser in the 2 μm band based on Raman gain.

  15. Modeling tracer transport in randomly heterogeneous porous media by nonlocal moment equations: Anomalous transport

    NASA Astrophysics Data System (ADS)

    Morales-Casique, E.; Lezama-Campos, J. L.; Guadagnini, A.; Neuman, S. P.

    2013-05-01

    Modeling tracer transport in geologic porous media suffers from the corrupt characterization of the spatial distribution of hydrogeologic properties of the system and the incomplete knowledge of processes governing transport at multiple scales. Representations of transport dynamics based on a Fickian model of the kind considered in the advection-dispersion equation (ADE) fail to capture (a) the temporal variation associated with the rate of spreading of a tracer, and (b) the distribution of early and late arrival times which are often observed in field and/or laboratory scenarios and are considered as the signature of anomalous transport. Elsewhere we have presented exact stochastic moment equations to model tracer transport in randomly heterogeneous aquifers. We have also developed a closure scheme which enables one to provide numerical solutions of such moment equations at different orders of approximations. The resulting (ensemble) average and variance of concentration fields were found to display a good agreement against Monte Carlo - based simulation results for mildly heterogeneous (or well-conditioned strongly heterogeneous) media. Here we explore the ability of the moment equations approach to describe the distribution of early arrival times and late time tailing effects which can be observed in Monte-Carlo based breakthrough curves (BTCs) of the (ensemble) mean concentration. We show that BTCs of mean resident concentration calculated at a fixed space location through higher-order approximations of moment equations display long tailing features of the kind which is typically associated with anomalous transport behavior and are not represented by an ADE model with constant dispersive parameter, such as the zero-order approximation.

  16. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    NASA Astrophysics Data System (ADS)

    Gatto, Riccardo

    2017-12-01

    This article considers the random walk over ℝ^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
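
    The walk itself is straightforward to simulate as a sanity check for any approximation. For unit-mean exponential step lengths and uniform directions in p = 3 dimensions (one of the abstract's explicit cases; the step count n below is an assumed value), independence of the steps makes the cross terms vanish, so E[R^2] = n * E[L^2] = 2n. This is a Monte Carlo sketch, not the saddlepoint computation.

```python
import math
import random

def walk_distance(n_steps, rng):
    """Distance from the origin after n_steps with uniformly distributed
    directions on the sphere and Exp(1) step lengths (p = 3)."""
    x = y = z = 0.0
    for _ in range(n_steps):
        length = rng.expovariate(1.0)
        u = rng.uniform(-1.0, 1.0)            # cos(polar angle), uniform on sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - u * u)
        x += length * s * math.cos(phi)
        y += length * s * math.sin(phi)
        z += length * u
    return math.sqrt(x * x + y * y + z * z)

rng = random.Random(0)
n = 5
mean_r2 = sum(walk_distance(n, rng) ** 2 for _ in range(20000)) / 20000
# independence of steps gives E[R^2] = n * E[L^2] = 2 * n
```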

  17. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    PubMed

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation; stochastic characteristic of nutrient loading can be investigated which provides the inputs for the decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries and the associated system risk through incorporating the concept of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results can not only facilitate identification of optimal effluent-trading schemes, but also gain insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that decision maker's preference towards risk would affect decision alternatives on trading scheme as well as system benefit. 
Compared with the conventional optimization methods, it is proved that BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties existing in nutrient transport behaviors to improve the accuracy in water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision alternatives. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.

    2015-12-01

    Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.

  19. Unlocking hidden genomic sequence

    PubMed Central

    Keith, Jonathan M.; Cochran, Duncan A. E.; Lala, Gita H.; Adams, Peter; Bryant, Darryn; Mitchelson, Keith R.

    2004-01-01

    Despite the success of conventional Sanger sequencing, significant regions of many genomes still present major obstacles to sequencing. Here we propose a novel approach with the potential to alleviate a wide range of sequencing difficulties. The technique involves extracting target DNA sequence from variants generated by introduction of random mutations. The introduction of mutations does not destroy original sequence information, but distributes it amongst multiple variants. Some of these variants lack problematic features of the target and are more amenable to conventional sequencing. The technique has been successfully demonstrated with mutation levels up to an average 18% base substitution and has been used to read previously intractable poly(A), AT-rich and GC-rich motifs. PMID:14973330
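
    The core idea (mutations distribute the original sequence information among variants rather than destroying it) can be sketched with a toy base-substitution model and a per-position majority vote. The 18% substitution level is taken from the abstract; the sequence length, variant count, and the majority-vote reconstruction itself are illustrative assumptions, not the authors' procedure.

```python
import random

def mutate(seq, rate, rng):
    """Introduce random base substitutions at the given per-base rate."""
    bases = "ACGT"
    out = []
    for b in seq:
        if rng.random() < rate:
            out.append(rng.choice([c for c in bases if c != b]))
        else:
            out.append(b)
    return "".join(out)

def consensus(variants):
    """Per-position majority vote across the variant reads."""
    return "".join(max("ACGT", key=col.count) for col in zip(*variants))

rng = random.Random(7)
target = "".join(rng.choice("ACGT") for _ in range(300))
variants = [mutate(target, 0.18, rng) for _ in range(15)]

recovered = consensus(variants)
accuracy = sum(a == b for a, b in zip(recovered, target)) / len(target)
avg_variant_acc = sum(
    v[i] == target[i] for v in variants for i in range(len(target))
) / (len(variants) * len(target))
```

    Each individual variant agrees with the target at only about 82% of positions, yet the consensus across variants recovers essentially the whole sequence, illustrating why heavily mutated variants can still carry the original information.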

  20. Irradiation direction from texture

    NASA Astrophysics Data System (ADS)

    Koenderink, Jan J.; Pont, Sylvia C.

    2003-10-01

    We present a theory of image texture resulting from the shading of corrugated (three-dimensional textured) surfaces, Lambertian on the micro scale, in the domain of geometrical optics. The derivation applies to isotropic Gaussian random surfaces, under collimated illumination, in normal view. The theory predicts the structure tensors from either the gradient or the Hessian of the image intensity and allows inferences of the direction of irradiation of the surface. Although the assumptions appear prima facie rather restrictive, even for surfaces that are not at all Gaussian, with the bidirectional reflectance distribution function far from Lambertian and vignetting and multiple scattering present, we empirically recover the direction of irradiation with an accuracy of a few degrees.

  1. A Case of Transient Pulmonary Interstitial Lesions in Aquaporin-4-positive Neuromyelitis Optica Spectrum Disorder.

    PubMed

    Asato, Yuko; Kamitani, Toshiaki; Ootsuka, Kuniyuki; Kuramochi, Mizuki; Nakanishi, Kozo; Shimada, Tetsuya; Takahashi, Toshiyuki; Misu, Tatsuro; Aoki, Masashi; Fujihara, Kazuo; Kawabata, Yoshinori

    2018-05-18

    We herein report the case of a 76-year-old man with aquaporin-4-immunoglobulin-G (AQP4-IgG)-positive neuromyelitis optica spectrum disorder (NMOSD), in whom transient interstitial pulmonary lesions developed at the early stage of the disease. Chest X-ray showed multiple infiltrative shadows in both upper lung fields, and computed tomography revealed abnormal shadows distributed randomly in the lungs. Surgical lung biopsy showed features of unclassifiable interstitial pneumonia, characterized by various types of air-space organization, which resulted in obscure lung structure. This is the first report to describe the pathological findings of interstitial pneumonia, which may represent a rare extra-central nervous system complication of NMOSD.

  2. Space shuttle solid rocket booster recovery system definition. Volume 2: SRB water impact Monte Carlo computer program, user's manual

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
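
    The Monte Carlo approach described (perturb multiple independent environmental and strength parameters simultaneously, then read off the distribution of the dependent quantities) can be sketched with a toy impact-load model; the load formula, parameter distributions, and units below are stand-in assumptions, not the HD 220 model.

```python
import random

def simulate_impacts(n_trials=50000, seed=3):
    """Toy water-impact Monte Carlo: load grows with impact velocity and wave
    height; a 'failure' occurs when the load exceeds a random component strength."""
    rng = random.Random(seed)
    failures = 0
    loads = []
    for _ in range(n_trials):
        velocity = rng.gauss(25.0, 3.0)     # vertical impact velocity (m/s)
        wave = rng.expovariate(1.0 / 1.5)   # wave height (m)
        strength = rng.gauss(900.0, 80.0)   # component strength (arbitrary units)
        load = 1.2 * velocity ** 2 + 40.0 * wave  # stand-in load model
        loads.append(load)
        if load > strength:
            failures += 1
    return loads, failures / n_trials

loads, p_fail = simulate_impacts()
mean_load = sum(loads) / len(loads)
```

    Each trial perturbs all independent parameters at once, and the collected loads approximate the probability distribution of the dependent parameter, which is exactly the statistic the user manual says the program provides.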

  3. Chromosome Rearrangements Recovered following Transformation of Neurospora Crassa

    PubMed Central

    Perkins, D. D.; Kinsey, J. A.; Asch, D. K.; Frederick, G. D.

    1993-01-01

    New chromosome rearrangements were found in 10% or more of mitotically stable transformants. This was shown for transformations involving a variety of different markers, vectors and recipient strains. Breakpoints were randomly distributed among the seven linkage groups. Controls using untransformed protoplasts of the same strains contained almost no rearrangements. A study of molecularly characterized Am(+) transformants showed that rearrangements are frequent when multiple ectopic integration events have occurred. In contrast, rearrangements are absent or infrequent when only the resident locus is restored to am(+) by a homologous event. Sequences of the transforming vector were genetically linked to breakpoints in 6 of 10 translocations that were examined using Southern hybridization or colony blots. PMID:8349106

  4. Motivational Interviewing May Improve Exercise Experience for People with Multiple Sclerosis: A Small Randomized Trial

    ERIC Educational Resources Information Center

    Smith, Douglas C.; Lanesskog, Deirdre; Cleeland, Leah; Motl, Robert; Weikert, Madeline; Dlugonski, Deirdre

    2012-01-01

    People with multiple sclerosis (MS) are likely to benefit from regular exercise, but physical inactivity is more common among people with MS than among the general population. This small randomized study evaluated whether motivational interviewing (MI) affects adherence to and personal experience in an exercise program. Inactive people with MS…

  5. Multiple Imputation of Item Scores in Test and Questionnaire Data, and Influence on Psychometric Results

    ERIC Educational Resources Information Center

    van Ginkel, Joost R.; van der Ark, L. Andries; Sijtsma, Klaas

    2007-01-01

    The performance of five simple multiple imputation methods for dealing with missing data were compared. In addition, random imputation and multivariate normal imputation were used as lower and upper benchmark, respectively. Test data were simulated and item scores were deleted such that they were either missing completely at random, missing at…

  6. A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2010-08-01

    A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007, and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained altogether for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given the total walk length being equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers which depend solely on d. These endpoint distributions have a simple geometrical interpretation. Expressed for a two-step planar walk whose q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint inside a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4.
Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
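
    The two-step planar walk with q = 1 is simple to simulate: conditioning the total length to 1 makes the step lengths (l, 1 - l) with l uniform on (0, 1), which is exactly a Dirichlet(1, 1) split. A quick Monte Carlo consistency check of the sphere-projection claim is that both distributions share E[R^2] = 2/3 (sample sizes below are assumed for illustration).

```python
import math
import random

def two_step_endpoint(rng):
    """Endpoint distance of a planar two-step walk with total length 1:
    step lengths (l, 1 - l), l ~ Uniform(0, 1), independent uniform directions."""
    l = rng.random()
    a = rng.uniform(0.0, 2.0 * math.pi)
    b = rng.uniform(0.0, 2.0 * math.pi)
    x = l * math.cos(a) + (1.0 - l) * math.cos(b)
    y = l * math.sin(a) + (1.0 - l) * math.sin(b)
    return math.hypot(x, y)

rng = random.Random(11)
n = 50000
walk_r2 = sum(two_step_endpoint(rng) ** 2 for _ in range(n)) / n

# Projection of a uniform point on the 3D unit sphere onto the equatorial
# disc: radius r = sqrt(1 - z^2) with z ~ Uniform(-1, 1) (Archimedes),
# so E[r^2] = 1 - E[z^2] = 2/3.
proj_r2 = sum(1.0 - rng.uniform(-1.0, 1.0) ** 2 for _ in range(n)) / n
```

    Matching second moments is of course only a necessary condition for the claimed identity of the full distributions, but it is a cheap check that the simulation and the geometric interpretation agree.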

  7. Continuous Time Random Walks with memory and financial distributions

    NASA Astrophysics Data System (ADS)

    Montero, Miquel; Masoliver, Jaume

    2017-11-01

    We study financial distributions from the perspective of Continuous Time Random Walks with memory. We review some of our previous developments and apply them to financial problems. We also present some new models with memory that can be useful in characterizing tendency effects which are inherent in most markets. Finally, we briefly study the effect on return distributions of fractional behavior in the distribution of pausing times between successive transactions.
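    A generic Continuous Time Random Walk with heavy-tailed pausing times can be sketched in a few lines. This is a hedged illustration of the basic CTRW mechanism, not the authors' memory models; the Pareto waiting times and Gaussian jumps are assumptions:

```python
import random

random.seed(7)

def ctrw_position(t_max, beta=1.5):
    """Position of a continuous-time random walk at time t_max: Gaussian jumps
    at transaction times separated by Pareto(beta) waiting times, a stand-in
    for heavy-tailed pauses between successive transactions."""
    t, x = 0.0, 0.0
    while True:
        wait = random.paretovariate(beta)    # power-law waiting time, tail ~ t**-beta
        if t + wait > t_max:
            return x
        t += wait
        x += random.gauss(0.0, 1.0)          # jump (e.g. a log-return)

samples = [ctrw_position(100.0) for _ in range(2000)]
mean = sum(samples) / len(samples)           # symmetric jumps: sample mean near 0
```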

  8. Impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via multiple integral approach.

    PubMed

    Chandrasekar, A; Rakkiyappan, R; Cao, Jinde

    2015-10-01

    This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network is synchronized with mixed time-delay. The considered impulsive effects can be synchronized at partly unknown transition probabilities. Besides, a multiple integral approach is also proposed to strengthen the Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then expressed as a set of solvable linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Non-Gaussian Correlations between Reflected and Transmitted Intensity Patterns Emerging from Opaque Disordered Media

    NASA Astrophysics Data System (ADS)

    Starshynov, I.; Paniagua-Diaz, A. M.; Fayard, N.; Goetschy, A.; Pierrat, R.; Carminati, R.; Bertolotti, J.

    2018-04-01

    The propagation of monochromatic light through a scattering medium produces speckle patterns in reflection and transmission, and the apparent randomness of these patterns prevents direct imaging through thick turbid media. Yet, since elastic multiple scattering is fundamentally a linear and deterministic process, information is not lost but distributed among many degrees of freedom that can be resolved and manipulated. Here, we demonstrate experimentally that the reflected and transmitted speckle patterns are robustly correlated, and we unravel all the complex and unexpected features of this fundamentally non-Gaussian and long-range correlation. In particular, we show that it is preserved even for opaque media with thickness much larger than the scattering mean free path, proving that information survives the multiple scattering process and can be recovered. The existence of correlations between the two sides of a scattering medium opens up new possibilities for the control of transmitted light without any feedback from the target side, but using only information gathered from the reflected speckle.

  10. Multiple testing with discrete data: Proportion of true null hypotheses and two adaptive FDR procedures.

    PubMed

    Chen, Xiongzhi; Doerge, Rebecca W; Heyse, Joseph F

    2018-05-11

    We consider multiple testing with false discovery rate (FDR) control when p values have discrete and heterogeneous null distributions. We propose a new estimator of the proportion of true null hypotheses and demonstrate that it is less upwardly biased than Storey's estimator and two other estimators. The new estimator induces two adaptive procedures, that is, an adaptive Benjamini-Hochberg (BH) procedure and an adaptive Benjamini-Hochberg-Heyse (BHH) procedure. We prove that the adaptive BH (aBH) procedure is conservative nonasymptotically. Through simulation studies, we show that these procedures are usually more powerful than their nonadaptive counterparts and that the adaptive BHH procedure is usually more powerful than the aBH procedure and a procedure based on randomized p-values. The adaptive procedures are applied to a study of HIV vaccine efficacy, where they identify more differentially polymorphic positions than the BH procedure at the same FDR level. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
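    For reference, the non-adaptive BH step-up procedure that the adaptive variants build on can be stated compactly. This sketches only the classic procedure; the paper's estimator of the true-null proportion is not reproduced here:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Classic (non-adaptive) BH step-up: find the largest rank k with
    p_(k) <= alpha * k / m and reject the k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.500, 0.900]
flags = benjamini_hochberg(pvals)            # rejects the two smallest p-values
```

The adaptive versions replace m by an estimate of the number of true nulls, which is where the proposed estimator of that proportion enters.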

  11. Logistic regression of family data from retrospective study designs.

    PubMed

    Whittemore, Alice S; Halpern, Jerry

    2003-11-01

    We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to consistently estimate the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate for beta(RE) and a consistent estimate for its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate for beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology. 
We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.

  12. Multiple lobes in the far-field distribution of terahertz quantum-cascade lasers due to self-interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Röben, B., E-mail: roeben@pdi-berlin.de; Wienold, M.; Schrottke, L.

    2016-06-15

    The far-field distribution of the emission intensity of terahertz (THz) quantum-cascade lasers (QCLs) frequently exhibits multiple lobes instead of a single-lobed Gaussian distribution. We show that such multiple lobes can result from self-interference related to the typically large beam divergence of THz QCLs and the presence of an inevitable cryogenic operation environment including optical windows. We develop a quantitative model to reproduce the multiple lobes. We also demonstrate how a single-lobed far-field distribution can be achieved.

  13. The Miniaturization of the AFIT Random Noise Radar

    DTIC Science & Technology

    2013-03-01

    RANDOM NOISE RADAR I. Introduction Recent advances in technology and signal processing techniques have opened the door to using an ultra-wide band random...AIR FORCE INSTITUTE OF TECHNOLOGY Wright-Patterson Air Force Base, Ohio DISTRIBUTION STATEMENT A. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED...and Computer Engineering Graduate School of Engineering and Management Air Force Institute of Technology Air University Air Education and Training

  14. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which would necessarily introduce fiducial signals of fluctuations into the random samples, weakening the signals of BAO, if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, such improvements will matter for future measurements of galaxy clustering.
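    The idea of populating randoms from a smooth fitted n(z), rather than by resampling the data, can be sketched with inverse-transform sampling from a tabulated CDF. The parametric form and its parameters below are illustrative assumptions, not the fit used in the paper:

```python
import bisect
import math
import random

random.seed(4)

def smooth_nz(z, z0=0.4):
    """A smooth parametric form n(z) ~ z^2 exp(-(z/z0)^1.5), a common fitting
    shape for survey redshift distributions (parameters illustrative only)."""
    return z * z * math.exp(-((z / z0) ** 1.5))

# Tabulate the normalized CDF on a fine grid, then draw random-sample
# redshifts by inverse-transform sampling from the smooth curve.
zs = [i / 1000 for i in range(1, 2001)]          # z in (0, 2]
acc, cdf = 0.0, []
for z in zs:
    acc += smooth_nz(z)
    cdf.append(acc)
cdf = [c / acc for c in cdf]

def draw_z():
    return zs[bisect.bisect_left(cdf, random.random())]

randoms = [draw_z() for _ in range(10_000)]
mean_z = sum(randoms) / len(randoms)
```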

  15. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    NASA Astrophysics Data System (ADS)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  16. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    PubMed

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  17. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2/(∑_{j=1}^n x_j^2/n), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. P_n^{(β)}(w) ~ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of this support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for the Gaussian unitary ensemble (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n and analyse their behaviour.
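    The definition of w is easy to illustrate at the smallest nontrivial size, a 2 × 2 GOE matrix (β = 1). This toy n is of course far from the n → ∞ Marčenko-Pastur limit, but the support w ∈ [0, n] and the exchangeability argument forcing E[w] = 1 are already visible:

```python
import math
import random

random.seed(3)

def goe2_eigenvalues():
    """Eigenvalues of a 2x2 GOE (beta = 1) matrix: N(0, 1) diagonal entries,
    N(0, 1/2) off-diagonal entry, using the closed-form symmetric spectrum."""
    a, c = random.gauss(0, 1), random.gauss(0, 1)
    b = random.gauss(0, math.sqrt(0.5))
    d = math.sqrt((a - c) ** 2 + 4 * b * b)
    return ((a + c - d) / 2, (a + c + d) / 2)

n = 2
ws = []
for _ in range(20_000):
    eig = goe2_eigenvalues()
    x = random.choice(eig)                   # an unordered (randomly chosen) eigenvalue
    ws.append(x * x / ((eig[0] ** 2 + eig[1] ** 2) / n))
mean_w = sum(ws) / len(ws)                   # exchangeability forces E[w] = 1
```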

  18. A statistical model of false negative and false positive detection of phase singularities.

    PubMed

    Jacquemet, Vincent

    2017-10-01

    The complexity of cardiac fibrillation dynamics can be assessed by analyzing the distribution of phase singularities (PSs) observed using mapping systems. Interelectrode distance, however, limits the accuracy of PS detection. To investigate in a theoretical framework the PS false negative and false positive rates in relation to the characteristics of the mapping system and fibrillation dynamics, we propose a statistical model of phase maps with controllable number and locations of PSs. In this model, phase maps are generated from randomly distributed PSs with physiologically-plausible directions of rotation. Noise and distortion of the phase are added. PSs are detected using topological charge contour integrals on regular grids of varying resolutions. Over 100 × 10^6 realizations of the random field process are used to estimate average false negative and false positive rates using a Monte-Carlo approach. The false detection rates are shown to depend on the average distance between neighboring PSs expressed in units of interelectrode distance, following approximately a power law with exponents in the range of 1.14 to 2 for false negatives and around 2.8 for false positives. In the presence of noise or distortion of phase, false detection rates at high resolution tend to a non-zero noise-dependent lower bound. This model provides an easy-to-implement tool for benchmarking PS detection algorithms over a broad range of configurations with multiple PSs.
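    On a grid, the topological charge contour integral used for PS detection reduces to summing wrapped phase differences around a closed loop. A minimal sketch on a synthetic single-PS phase map (not the paper's statistical model):

```python
import math

def winding_number(phase_grid, loop):
    """Topological-charge contour integral on a grid: sum the wrapped phase
    differences around a closed loop; +1 or -1 indicates a phase singularity
    (PS) enclosed by the loop, 0 indicates none."""
    total = 0.0
    for k in range(len(loop)):
        (i1, j1), (i2, j2) = loop[k], loop[(k + 1) % len(loop)]
        d = phase_grid[i2][j2] - phase_grid[i1][j1]
        total += math.atan2(math.sin(d), math.cos(d))   # wrap difference to (-pi, pi]
    return round(total / (2 * math.pi))

# Synthetic phase map with a single PS at (4.5, 4.5) on a 10x10 "electrode" grid.
phase = [[math.atan2(i - 4.5, j - 4.5) for j in range(10)] for i in range(10)]
loop_around = [(3, 3), (3, 6), (6, 6), (6, 3)]   # square path enclosing the PS
loop_away = [(0, 0), (0, 2), (2, 2), (2, 0)]     # path enclosing no PS
```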

  19. A Statistical Method for Synthesizing Mediation Analyses Using the Product of Coefficient Approach Across Multiple Trials

    PubMed Central

    Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks

    2016-01-01

    Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
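    The Monte Carlo confidence interval for a mediated effect a*b can be sketched as below; the coefficients, standard errors, and function name are hypothetical, and the trial-combination and between-trial variance steps of the actual method are not reproduced:

```python
import random

random.seed(11)

def mc_ci_product(a, se_a, b, se_b, n_draws=100_000, level=0.95):
    """Monte Carlo confidence interval for a mediated effect a*b: draw the two
    path coefficients from their normal sampling distributions and take
    quantiles of the product (the product itself is not normally distributed)."""
    prods = sorted(random.gauss(a, se_a) * random.gauss(b, se_b)
                   for _ in range(n_draws))
    lo = prods[int(n_draws * (1 - level) / 2)]
    hi = prods[int(n_draws * (1 + level) / 2)]
    return lo, hi

# Hypothetical combined path coefficients and standard errors from the trials.
lo, hi = mc_ci_product(a=0.40, se_a=0.10, b=0.30, se_b=0.08)
```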

  20. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.

  1. The Shark Random Swim - (Lévy Flight with Memory)

    NASA Astrophysics Data System (ADS)

    Businger, Silvia

    2018-05-01

    The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory about the whole past. We study the Shark Random Swim, a random walk with memory about the whole past, whose steps are α-stable distributed with α ∈ (0, 2]. Our aim in this work is to study the impact of the heavy-tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
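    A minimal simulation of the underlying Elephant Random Walk (the p-memory mechanism that the Shark Random Swim generalizes) can be written directly from the definition; replacing the ±1 steps by α-stable draws would give the shark version:

```python
import random

random.seed(5)

def elephant_walk(n, p, first=1):
    """One-dimensional Elephant Random Walk: each new step recalls a uniformly
    chosen past step and repeats it with probability p (reverses it otherwise)."""
    steps = [first]
    for _ in range(n - 1):
        s = random.choice(steps)             # memory of the whole past
        steps.append(s if random.random() < p else -s)
    return sum(steps)

ends = [elephant_walk(500, p=0.75) for _ in range(200)]
```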

  2. Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.

    2018-05-01

    Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
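    The isolated-node count in the standard uniform model is straightforward to estimate by simulation (a brute-force O(n²) sketch with arbitrary parameters):

```python
import math
import random

random.seed(2)

def isolated_count(n, r):
    """Drop n uniform points on the unit square, link pairs within distance r,
    and count nodes with no link at all."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    count = 0
    for i, (xi, yi) in enumerate(pts):
        if all(i == j or math.hypot(xi - xj, yi - yj) >= r
               for j, (xj, yj) in enumerate(pts)):
            count += 1
    return count

counts = [isolated_count(200, 0.08) for _ in range(100)]
mean_isolated = sum(counts) / len(counts)    # approximately Poisson per realization
```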

  3. Optimal random search for a single hidden target.

    PubMed

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
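    The square-root rule can be verified in a discrete toy model where the searcher must land in the target's bin: the expected number of trials is ∑_i p_i/q_i, and by Cauchy-Schwarz this is minimized by q ∝ √p. A small deterministic check (an illustration, not the paper's continuum argument):

```python
import math

def expected_trials(p, q):
    """Expected number of independent draws from the search distribution q
    until a draw lands in the target's bin, averaged over the target
    distribution p: sum_i p_i / q_i (geometric waiting time per bin)."""
    return sum(pi / qi for pi, qi in zip(p, q))

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

# Discretised Gaussian target on a 1D grid.
xs = [i / 10 - 5 for i in range(101)]
p = normalize([math.exp(-x * x / 2) for x in xs])

q_sqrt = normalize([math.sqrt(pi) for pi in p])   # square-root rule
q_match = p                                       # naive: search exactly where the target is likely
q_uniform = normalize([1.0] * len(p))

best = expected_trials(p, q_sqrt)
```

Notably, matching the target exactly (q = p) does no better than uniform search on an m-bin grid (both give m expected trials), while the square-root rule beats both.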

  4. The Extent of Disorder among Charge-balancing Cations in Silicate Glasses and Melts: Spectroscopic Analysis and ab initio Molecular Orbital Calculations

    NASA Astrophysics Data System (ADS)

    Lee, S.; Doyle, C. S.; Stebbins, J. F.

    2001-12-01

    Aluminosilicate melts are one of the dominant components in the upper mantle and crust. Essential to the thermodynamic and transport properties of these systems is a full understanding of the atomic arrangements and the extent of disorder. Recent quantification of the extent of disorder among 'framework cations' in silicate melts using NMR provided improved prospects on the atomic structure of the glasses and melts and their corresponding properties and allowed the degree of randomness to be evaluated in terms of the degree of Al-avoidance (Q) and degree of phase separations (P) (Lee and Stebbins, J. Phys. Chem. B 104, 4091; Lee and Stebbins, GCA in press). Quantitative estimation of the extent of disorder among 'charge-balancing cations' including Na in aluminosilicate glasses, however, has remained an unsolved problem, and these cations have often been assumed to be randomly distributed. Here, we explore the intermediate range order around Na in charge-balanced aluminosilicate glasses using Na-23 NMR and Near-edge X-ray absorption fine structure (NEXAFS) with full multiple scattering (FMS) simulations combined with ab initio molecular orbital calculations. We also quantify the extent of disorder in charge-balancing cations as a function of Na-O bond length (d(Na-O)) distribution with composition and present a structural model favoring ordered Na distributions. Peak position in Na-23 magic angle spinning (MAS) spectra of aluminosilicate glasses with varying R (Si/Al) at 14.1 T varies from -10.28 ppm (R = 0.7) to -19.98 ppm (R = 6). These results suggest that average d(Na-O) increases with increasing R, which is confirmed by Na-23 multiple quantum MAS spectra where the chemical shift moves toward lower frequency with increasing Si and shows the individual Gaussian components of Na-O distributions such as Na-(Al-O-Al), Na-(Si-O-Al) and Na-(Si-O-Si). Calculated d(Na-(Al-O-Al)) of 2.57 Å is shorter than d(Na-(Si-O-Si)) of 2.88 Å. 
Strong compositional dependence is further manifested in Na K-edge NEXAFS spectra for aluminosilicate glasses that are characterized by two main peaks at about 1057 eV (A) and 1062 eV (B). The intensity ratio between peaks A and B increases with increasing R, which is consistent with our FMS simulations of model clusters with R and implies that Na has a rather well ordered oxygen coordination and that the Na-O distribution depends on the types of nearby framework cations. The potential energy surfaces for model six-member rings (NaAl2Si4O6(OH)12, with and without Al-O-Al) were calculated using ab initio calculations at the HF/6-311G(d) level in order to investigate the equilibrium atomic configurations around Na. The results manifest the varying bonding preference of Na to different framework oxygens. Na is located in a single deep and narrow basin in the potential energy surface. The motion of Na is therefore restricted to near its equilibrium position even at higher temperature, contrary to the conventional random-distribution model with moderate Na mobility, demonstrating that the dynamics of Na should be associated with the collective motions of framework cations and oxygens. In this study, we provide new insights into the nature of disorder in charge-balancing cations in silicate glasses using spectroscopy combined with simulations, highlighting a more complete, atomic-level understanding of the dynamic processes in silicate magmas.

  5. Supporting Students in Learning with Multiple Representation to Improve Student Mental Models on Atomic Structure Concepts

    ERIC Educational Resources Information Center

    Sunyono; Yuanita, L.; Ibrahim, M.

    2015-01-01

    The aim of this research is to identify the effectiveness of a multiple representation-based learning model, which builds a mental model within the concept of atomic structure. The research sample of 108 students in 3 classes is obtained randomly from among students of Mathematics and Science Education Studies using a stratified random sampling…

  6. Examining the Missing Completely at Random Mechanism in Incomplete Data Sets: A Multiple Testing Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko; Lichtenberg, Peter A.; Paulson, Daniel

    2012-01-01

    A multiple testing procedure for examining implications of the missing completely at random (MCAR) mechanism in incomplete data sets is discussed. The approach uses the false discovery rate concept and is concerned with testing group differences on a set of variables. The method can be used for ascertaining violations of MCAR and disproving this…

  7. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

    Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) by variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
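    The core MBH-with-long-tails idea can be sketched as a hop-accept loop with Cauchy perturbations. This is a stripped-down illustration: a real MBH implementation follows each hop with a local optimizer, and the test function and parameters here are arbitrary:

```python
import math
import random

random.seed(42)

def rastrigin(x):
    """A standard multimodal test function with global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def mbh_cauchy(f, x0, hops=300, step=0.5):
    """Monotonic basin hopping, stripped down: perturb the incumbent with
    heavy-tailed Cauchy draws (inverse CDF: tan of a uniform) and accept a
    hop only if it improves the objective."""
    best_x, best_f = list(x0), f(x0)
    for _ in range(hops):
        cand = [xi + step * math.tan(math.pi * (random.random() - 0.5))
                for xi in best_x]
        fc = f(cand)
        if fc < best_f:                      # monotonic: improvements only
            best_x, best_f = cand, fc
    return best_x, best_f

x, fx = mbh_cauchy(rastrigin, [3.0, -2.0])
```

The occasional very long Cauchy hop is what lets the search escape a basin's neighborhood, while the bulk of draws stay local.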

  8. Optimal sampling with prior information of the image geometry in microfluidic MRI.

    PubMed

    Han, S H; Cho, H; Paulsen, J L

    2015-03-01

    Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    PubMed

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions possibly with excess-zeros. In addition the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
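    Count data with excess zeros, of the kind this simulation model generates, can be illustrated with a zero-inflated Poisson sketch. The parameters are illustrative only; the actual model additionally supports several count distributions, blocking, and repeated measures:

```python
import math
import random

random.seed(9)

def poisson(lam):
    """Knuth's multiplicative Poisson sampler (fine for small lambda)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

def zip_counts(n_plots, lam, pi_zero):
    """Zero-inflated Poisson counts for n_plots field plots: with probability
    pi_zero the organism is structurally absent, otherwise its abundance is
    Poisson(lam)."""
    return [0 if random.random() < pi_zero else poisson(lam)
            for _ in range(n_plots)]

counts = zip_counts(500, lam=4.0, pi_zero=0.3)
zero_frac = counts.count(0) / len(counts)    # about 0.3 + 0.7 * exp(-4)
```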

  10. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants

    PubMed Central

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-01-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model supports completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325

  11. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.

  12. The triglyceride composition of 17 seed fats rich in octanoic, decanoic, or lauric acid.

    PubMed

    Litchfield, C; Miller, E; Harlow, R D; Reiser, R

    1967-07-01

    Seed fats of eight species of Lauraceae (laurel family), six species of Cuphea (Lythraceae family), and three species of Ulmaceae (elm family) were extracted, and the triglycerides were isolated by preparative thin-layer chromatography. GLC of the triglycerides on a silicone column resolved 10 to 18 peaks with a 22 to 58 carbon number range for each fat. These carbon number distributions yielded considerable information about the triglyceride compositions of the fats. The most interesting finding was with Laurus nobilis seed fat, which contained 58.4% lauric acid and 29.2-29.8% trilaurin. A maximum of 19.9% trilaurin would be predicted by a 1,2,3-random, a 1,3-random-2-random, or a 1-random-2-random-3-random distribution of the lauric acid(3). This indicates a specificity for the biosynthesis of a simple triglyceride by Laurus nobilis seed enzymes. Cuphea lanceolata seed fat also contained more simple triglyceride (tridecanoin) than would be predicted by the fatty acid distribution theories.

  13. Spatial Variability of Soil-Water Storage in the Southern Sierra Critical Zone Observatory: Measurement and Prediction

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Bales, R. C.; Zheng, Z.; Glaser, S. D.

    2017-12-01

    Predicting the spatial distribution of soil moisture in mountain environments is confounded by multiple factors, including complex topography, spatial variability of soil texture, sub-surface flow paths, and snow-soil interactions. While remote-sensing tools such as passive-microwave monitoring can measure spatial variability of soil moisture, they only capture near-surface soil layers. Large-scale sensor networks are increasingly providing soil-moisture measurements at high temporal resolution across a broader range of depths than are accessible from remote sensing. It may be possible to combine these in-situ measurements with high-resolution LIDAR topography and canopy cover to estimate the spatial distribution of soil moisture at high spatial resolution at multiple depths. We study the feasibility of this approach using six years (2009-2014) of daily volumetric water content measurements at 10-, 30-, and 60-cm depths from the Southern Sierra Critical Zone Observatory. A non-parametric, multivariate regression algorithm, Random Forest, was used to predict the spatial distribution of depth-integrated soil-water storage, based on the in-situ measurements and a combination of node attributes (topographic wetness, northness, elevation, soil texture, and location with respect to canopy cover). We observe predictable patterns of predictor accuracy and independent variable ranking during the six-year study period. Predictor accuracy is highest during the snow-cover and early recession periods but declines during the dry period. Soil texture has consistently high feature importance. Other landscape attributes exhibit seasonal trends: northness peaks during the wet-up period, and elevation and topographic-wetness index peak during the recession and dry period, respectively.
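    The regression approach described above can be illustrated with a toy Random Forest built from bootstrap-sampled decision stumps. This is only a sketch of the technique: the synthetic "storage ~ wetness + elevation" data and all parameter values are invented stand-ins for the node attributes the study uses, not the authors' actual model (a real analysis would use full trees, e.g. scikit-learn's RandomForestRegressor, on the measured attributes).

    ```python
    import random
    from statistics import mean

    def fit_stump(X, y, feat):
        """Best single-feature threshold split minimizing summed squared error."""
        best = None
        for t in sorted(set(row[feat] for row in X)):
            left = [yi for row, yi in zip(X, y) if row[feat] <= t]
            right = [yi for row, yi in zip(X, y) if row[feat] > t]
            if not left or not right:
                continue
            ml, mr = mean(left), mean(right)
            sse = sum((yi - ml) ** 2 for yi in left) + sum((yi - mr) ** 2 for yi in right)
            if best is None or sse < best[0]:
                best = (sse, t, ml, mr)
        if best is None:                         # degenerate case: constant predictor
            return feat, float("inf"), mean(y), mean(y)
        _, t, ml, mr = best
        return feat, t, ml, mr

    def fit_forest(X, y, n_trees, rng):
        """Bagged stumps: each fit on a bootstrap resample and one random feature."""
        forest, n, d = [], len(X), len(X[0])
        for _ in range(n_trees):
            idx = [rng.randrange(n) for _ in range(n)]   # bootstrap resample
            feat = rng.randrange(d)                      # random feature choice
            forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx], feat))
        return forest

    def predict(forest, row):
        """Ensemble prediction = average over all stumps."""
        return mean(m_lo if row[f] <= t else m_hi for f, t, m_lo, m_hi in forest)

    # Purely illustrative synthetic data: storage rises with wetness, falls with elevation.
    rng = random.Random(0)
    X = [(rng.random(), 1000.0 + 1000.0 * rng.random()) for _ in range(300)]
    y = [2.0 * w - 0.001 * e + rng.gauss(0.0, 0.05) for w, e in X]
    forest = fit_forest(X, y, n_trees=60, rng=rng)
    ```

    The ensemble recovers the monotone wetness-storage relationship even though each stump sees only one feature of one bootstrap sample, which is the essential mechanism behind the feature-importance rankings the study reports.
    
    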

  14. Evaluating the validity of multiple imputation for missing physiological data in the national trauma data bank.

    PubMed

    Moore, Lynne; Hanley, James A; Lavoie, André; Turgeon, Alexis

    2009-05-01

    The National Trauma Data Bank (NTDB) is plagued by the problem of missing physiological data. The Glasgow Coma Scale score, Respiratory Rate and Systolic Blood Pressure are an essential part of risk adjustment strategies for trauma system evaluation and clinical research. Missing data on these variables may compromise the feasibility and the validity of trauma group comparisons. The objective was to evaluate the validity of Multiple Imputation (MI) for completing missing physiological data in the NTDB by assessing the impact of MI on 1) frequency distributions, 2) associations with mortality, and 3) risk adjustment. Analyses were based on 170,956 NTDB observations with complete physiological data (observed data set). Missing physiological data were artificially imposed on this data set and then imputed using MI (MI data set). To assess the impact of MI on risk adjustment, 100 pairs of hospitals were randomly selected with replacement and compared using adjusted Odds Ratios (OR) of mortality. ORs generated by the observed data set were then compared to those generated by the MI data set. Frequency distributions and associations with mortality were preserved following MI. The median absolute difference between adjusted ORs of mortality generated by the observed data set and by the MI data set was 3.6% (inter-quartile range: 2.4%-6.1%). This study suggests that, provided it is implemented with care, MI of missing physiological data in the NTDB leads to valid frequency distributions, preserves associations with mortality, and does not compromise risk adjustment in inter-hospital comparisons of mortality.

  15. Double-blind evaluation of the safety and pharmacokinetics of multiple oral once-daily 750-milligram and 1-gram doses of levofloxacin in healthy volunteers.

    PubMed

    Chien, S C; Wong, F A; Fowler, C L; Callery-D'Amico, S V; Williams, R R; Nayak, R; Chow, A T

    1998-04-01

    The safety and pharmacokinetics of once-daily oral levofloxacin in 16 healthy male volunteers were investigated in a randomized, double-blind, placebo-controlled study. Subjects were randomly assigned to the treatment (n = 10) or placebo group (n = 6). In study period 1, 750 mg of levofloxacin or a placebo was administered orally as a single dose on day 1, followed by a washout period on days 2 and 3; dosing resumed for days 4 to 10. Following a 3-day washout period, 1 g of levofloxacin or a placebo was administered in a similar fashion in period 2. Plasma and urine levofloxacin concentrations were measured by high-pressure liquid chromatography. Pharmacokinetic parameters were estimated by model-independent methods. Levofloxacin was rapidly absorbed after single and multiple once-daily 750-mg and 1-g doses with an apparently large volume of distribution. Peak plasma levofloxacin concentration (Cmax) values were generally attained within 2 h postdose. The mean values of Cmax and area under the concentration-time curve from 0 to 24 h (AUC0-24) following a single 750-mg dose were 7.1 microg/ml and 71.3 microg x h/ml, respectively, compared to 8.6 microg/ml and 90.7 microg x h/ml, respectively, at steady state. Following the single 1-g dose, mean Cmax and AUC0-24 values were 8.9 microg/ml and 95.4 microg x h/ml, respectively; corresponding values at steady state were 11.8 microg/ml and 118 microg x h/ml. These Cmax and AUC0-24 values indicate modest and similar degrees of accumulation upon multiple dosing at the two dose levels. Values of apparent total body clearance (CL/F), apparent volume of distribution (Vss/F), half-life (t1/2), and renal clearance (CL[R]) were similar for the two dose levels and did not vary from single to multiple dosing. 
Mean steady-state values for CL/F, Vss/F, t1/2, and CL(R) following 750 mg of levofloxacin were 143 ml/min, 100 liters, 8.8 h, and 116 ml/min, respectively; corresponding values for the 1-g dose were 146 ml/min, 105 liters, 8.9 h, and 105 ml/min. In general, the pharmacokinetics of levofloxacin in healthy subjects following 750-mg and 1-g single and multiple once-daily oral doses appear to be consistent with those found in previous studies of healthy volunteers given 500-mg doses. Levofloxacin was well tolerated at either high dose level. The most frequently reported drug-related adverse events were nausea and headache.
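    The reported clearance values can be cross-checked against the standard noncompartmental relation CL/F = Dose / AUC at steady state. The short script below is an illustration added here, not part of the study; it reproduces the reported steady-state clearances to within a few percent and the "modest and similar" accumulation ratios.

    ```python
    def cl_f_ml_per_min(dose_mg, auc_ug_h_per_ml):
        """Apparent clearance CL/F = Dose / AUC(0-24) at steady state."""
        dose_ug = dose_mg * 1000.0                  # mg -> micrograms
        return dose_ug / auc_ug_h_per_ml / 60.0     # ml/h -> ml/min

    cl_750 = cl_f_ml_per_min(750, 90.7)     # ~138 ml/min vs. 143 ml/min reported
    cl_1000 = cl_f_ml_per_min(1000, 118)    # ~141 ml/min vs. 146 ml/min reported

    # Accumulation ratio R = AUC(steady state) / AUC(single dose)
    r_750 = 90.7 / 71.3     # ~1.27
    r_1000 = 118 / 95.4     # ~1.24
    ```

    The small gap between the recomputed and reported values is expected, since the published means were averaged over per-subject estimates rather than derived from the mean AUC.
    
    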

  16. Double-Blind Evaluation of the Safety and Pharmacokinetics of Multiple Oral Once-Daily 750-Milligram and 1-Gram Doses of Levofloxacin in Healthy Volunteers

    PubMed Central

    Chien, Shu-Chean; Wong, Frank A.; Fowler, Cynthia L.; Callery-D’Amico, Susan V.; Williams, R. Rex; Nayak, Ramchandra; Chow, Andrew T.

    1998-01-01

    The safety and pharmacokinetics of once-daily oral levofloxacin in 16 healthy male volunteers were investigated in a randomized, double-blind, placebo-controlled study. Subjects were randomly assigned to the treatment (n = 10) or placebo group (n = 6). In study period 1, 750 mg of levofloxacin or a placebo was administered orally as a single dose on day 1, followed by a washout period on days 2 and 3; dosing resumed for days 4 to 10. Following a 3-day washout period, 1 g of levofloxacin or a placebo was administered in a similar fashion in period 2. Plasma and urine levofloxacin concentrations were measured by high-pressure liquid chromatography. Pharmacokinetic parameters were estimated by model-independent methods. Levofloxacin was rapidly absorbed after single and multiple once-daily 750-mg and 1-g doses with an apparently large volume of distribution. Peak plasma levofloxacin concentration (Cmax) values were generally attained within 2 h postdose. The mean values of Cmax and area under the concentration-time curve from 0 to 24 h (AUC0–24) following a single 750-mg dose were 7.1 μg/ml and 71.3 μg · h/ml, respectively, compared to 8.6 μg/ml and 90.7 μg · h/ml, respectively, at steady state. Following the single 1-g dose, mean Cmax and AUC0–24 values were 8.9 μg/ml and 95.4 μg · h/ml, respectively; corresponding values at steady state were 11.8 μg/ml and 118 μg · h/ml. These Cmax and AUC0–24 values indicate modest and similar degrees of accumulation upon multiple dosing at the two dose levels. Values of apparent total body clearance (CL/F), apparent volume of distribution (Vss/F), half-life (t1/2), and renal clearance (CLR) were similar for the two dose levels and did not vary from single to multiple dosing. Mean steady-state values for CL/F, Vss/F, t1/2, and CLR following 750 mg of levofloxacin were 143 ml/min, 100 liters, 8.8 h, and 116 ml/min, respectively; corresponding values for the 1-g dose were 146 ml/min, 105 liters, 8.9 h, and 105 ml/min. 
In general, the pharmacokinetics of levofloxacin in healthy subjects following 750-mg and 1-g single and multiple once-daily oral doses appear to be consistent with those found in previous studies of healthy volunteers given 500-mg doses. Levofloxacin was well tolerated at either high dose level. The most frequently reported drug-related adverse events were nausea and headache. PMID:9559801

  17. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.

  18. Efficient sampling of complex network with modified random walk strategies

    NASA Astrophysics Data System (ADS)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, and the major properties of the sampled subnets, such as sampling efficiency, degree distribution, average degree and average clustering coefficient, are studied. Similar conclusions are reached with all three random walk strategies. Firstly, networks with small scale and simple structure are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original network within a limited number of steps. Thirdly, the degree distributions of the subnets are all slightly biased toward the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, characteristic features such as the large clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
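    A minimal sketch of the no-retracing idea, under the assumption (not spelled out in the abstract) that the walker simply avoids stepping straight back along the edge it just used unless it has no other choice; the graph size and probabilities are illustrative.

    ```python
    import random

    def er_graph(n, p, rng):
        """Undirected Erdos-Renyi G(n, p) as an adjacency dict of sets."""
        adj = {i: set() for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    adj[i].add(j)
                    adj[j].add(i)
        return adj

    def nr_random_walk(adj, start, steps, rng):
        """No-retracing walk: never return along the previous edge unless forced."""
        sampled = {start}
        prev, cur = None, start
        for _ in range(steps):
            nbrs = list(adj[cur])
            if not nbrs:
                break                                   # isolated node, walk dies
            choices = [v for v in nbrs if v != prev] or nbrs
            prev, cur = cur, rng.choice(choices)
            sampled.add(cur)
        return sampled

    rng = random.Random(42)
    g = er_graph(200, 0.05, rng)
    subnet = nr_random_walk(g, 0, 1000, rng)
    ```

    Comparing the degree distribution of `subnet` against that of the full graph is one way to observe the high-degree bias the abstract describes: a walker reaches each node through an edge, so high-degree nodes are visited disproportionately often.
    
    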

  19. Multiple-Ring Digital Communication Network

    NASA Technical Reports Server (NTRS)

    Kirkham, Harold

    1992-01-01

    Optical-fiber digital communication network to support data-acquisition and control functions of electric-power-distribution networks. Optical-fiber links of communication network follow power-distribution routes. Since fiber crosses open power switches, communication network includes multiple interconnected loops with occasional spurs. At each intersection, a node is needed. Nodes of communication network include power-distribution substations and power-controlling units. In addition to serving data acquisition and control functions, each node acts as repeater, passing on messages to next node(s). Multiple-ring communication network operates on new AbNET protocol and features fiber-optic communication.

  20. Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, B. M.; Milligan, M.

    2011-03-01

    In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.

  1. NHash: Randomized N-Gram Hashing for Distributed Generation of Validatable Unique Study Identifiers in Multicenter Research.

    PubMed

    Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D; Cui, Licong

    2015-11-10

    A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (validatability), in that inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. This paper introduces a novel method called "Randomized N-gram Hashing (NHash)," for generating unique study identifiers in a distributed and validatable fashion, in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym that links research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk for identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. NHash consists of 2 phases. First, an intermediate string using randomized N-gram hashing is generated. This string consists of a collection of N-gram hashes f1, f2, ..., fk. The input for each function fi has 3 components: a random number r, an integer n, and input data m. The result, fi(r, n, m), is an n-gram of m with a starting position s, which is computed as (r mod |m|), where |m| represents the length of m. The output for Step 1 is the concatenation of the sequence f1(r1, n1, m1), f2(r2, n2, m2), ..., fk(rk, nk, mk). In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as shift cipher. 
The result of the encryption, concatenated with the random number r, is the final NHash study identifier. We performed experiments using a large synthesized dataset comparing NHash with random strings, and demonstrated negligible probability of collision. We implemented NHash for the Center for SUDEP Research (CSR), a National Institute of Neurological Disorders and Stroke-funded Center Without Walls for Collaborative Research in the Epilepsies. This multicenter collaboration involves 14 institutions across the United States and Europe, bringing together extensive and diverse expertise to understand sudden unexpected death in epilepsy patients (SUDEP). The CSR Data Repository has successfully used NHash to link deidentified multimodal clinical data collected in participating CSR institutions, meeting all desired objectives of NHash.
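    The two-phase construction can be illustrated with a small sketch. Everything here beyond the abstract's own description is an assumption made for brevity: the wrap-around boundary rule for n-grams past the end of the string, the alphabet, the shift amount, and the use of a single shared random number rather than the per-function parameters (ri, ni, mi) the paper specifies.

    ```python
    def ngram(r, n, m):
        """f(r, n, m): the n-gram of m starting at s = r mod |m| (Phase 1)."""
        s = r % len(m)
        return (m + m)[s:s + n]        # wrap-around past the end is an assumption

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

    def shift_cipher(text, k):
        """Caesar-style shift, one of the 'additional encryption techniques'."""
        idx = {c: i for i, c in enumerate(ALPHABET)}
        return "".join(ALPHABET[(idx[c] + k) % len(ALPHABET)] if c in idx else c
                       for c in text)

    def nhash(fields, r, n=3, k=7):
        """Phase 1: concatenate n-gram hashes; Phase 2: encrypt, then append r."""
        intermediate = "".join(ngram(r, n, m) for m in fields)
        return shift_cipher(intermediate, k) + str(r)
    ```

    The identifier is a deterministic pseudonym once r is fixed (the same inputs always yield the same string), while an edit to any character covered by an extracted n-gram produces a different, hence invalid, identifier.
    
    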

  2. Mechanism of Muong Nong-type tektite formation and speculation on the source of Australasian tektites

    NASA Technical Reports Server (NTRS)

    Schnetzler, C. C.

    1992-01-01

    The source crater of the youngest and largest of the tektite strewnfields, the Australasian strewnfield, has not been located. A number of lines of evidence indicate that the Muong Nong-type tektites, primarily found in Indochina, are more primitive than the much more abundant and widespread splash-form tektites, and are proximal to the source. In this study, the spatial distribution of Muong Nong-type tektite sites and their chemical character have been used to indicate the approximate location of the source. The variation of Muong Nong-type tektite chemical composition appears to be caused by mixing of two silicate rock end-members and a small amount of limestone, and not by vapor fractionation. The variation in composition is not random, and does not support in situ melting or multiple impact theories. The distribution of both Muong Nong and splash-form tektite sites suggests the source is in a limited area near the southern part of the Thailand-Laos border.

  3. Occurrence of four types of growth hormone-related dwarfism in Israeli communities.

    PubMed

    Adam, A; Josefsberg, Z; Pertzelan, A; Zadik, Z; Chemke, J M; Laron, Z

    1981-09-01

    Data are presented on 121 dwarfed patients belonging to 98 Israeli families with 4 types of dwarfism related to deficiency or inactivity of human growth hormone (hGH): isolated growth hormone deficiency (IGHD), partial hGH deficiency (pIGHD), multiple pituitary hormone deficiencies (MPHD) and Laron-type dwarfism (LTD). These series are believed to comprise most of the dwarfs of these types in Israel; however, their distribution among the various ethnic communities varies greatly: LTD (15 families) is confined to a few communities of Oriental Jews; only 3 of 30 families with IGHD, but 7 of 10 families with pIGHD, are Ashkenazi, whereas the ethnic distribution of 42 families with MPHD corresponds roughly to that of the general population. These data seem to reflect the role of genetic factors in the etiology of the various types of dwarfism: the stronger the genetic component, the greater is its deviation from random occurrence among the various communities.

  4. Interarrival times of message propagation on directed networks.

    PubMed

    Mihaljev, Tamara; de Arcangelis, Lucilla; Herrmann, Hans J

    2011-08-01

    One of the challenges in fighting cybercrime is to understand the dynamics of message propagation on botnets, networks of infected computers used to send viruses, unsolicited commercial emails (SPAM) or denial of service attacks. We map this problem to the propagation of multiple random walkers on directed networks and we evaluate the interarrival time distribution between successive walkers arriving at a target. We show that the temporal organization of this process, which models information propagation on unstructured peer-to-peer networks, has the same features as SPAM reaching a single user. We study the behavior of the message interarrival time distribution on three different network topologies using two different rules for sending messages. In all networks the propagation is not a pure Poisson process. It shows universal features on Poissonian networks and a more complex behavior on scale-free networks. Results open the possibility to indirectly learn about the process of sending messages on networks with unknown topologies, by studying interarrival times at any node of the network.
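    The setup — multiple independent walkers on a directed network, with interarrival times recorded at a target node — can be simulated directly. The graph model, walker count, and dead-end restart rule below are illustrative choices, not the authors' exact protocol.

    ```python
    import random

    def directed_er(n, p, rng):
        """Directed Erdos-Renyi graph: edge i -> j present with probability p."""
        return {i: [j for j in range(n) if j != i and rng.random() < p]
                for i in range(n)}

    def interarrival_times(adj, n_walkers, steps, target, rng):
        """Advance all walkers in lock-step; return gaps between hits at target."""
        n = len(adj)
        walkers = [rng.randrange(n) for _ in range(n_walkers)]
        hits = []
        for t in range(steps):
            for i, node in enumerate(walkers):
                out = adj[node]
                walkers[i] = rng.choice(out) if out else rng.randrange(n)  # restart at dead ends
                if walkers[i] == target:
                    hits.append(t)
        return [b - a for a, b in zip(hits, hits[1:])]

    rng = random.Random(7)
    gaps = interarrival_times(directed_er(100, 0.05, rng), 20, 2000, 0, rng)
    ```

    Histogramming `gaps` and comparing against a fitted geometric (discrete-exponential) distribution is one way to probe the non-Poissonian temporal organization the abstract describes; the interesting deviations are expected on scale-free rather than homogeneous topologies.
    
    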

  5. Interarrival times of message propagation on directed networks

    NASA Astrophysics Data System (ADS)

    Mihaljev, Tamara; de Arcangelis, Lucilla; Herrmann, Hans J.

    2011-08-01

    One of the challenges in fighting cybercrime is to understand the dynamics of message propagation on botnets, networks of infected computers used to send viruses, unsolicited commercial emails (SPAM) or denial of service attacks. We map this problem to the propagation of multiple random walkers on directed networks and we evaluate the interarrival time distribution between successive walkers arriving at a target. We show that the temporal organization of this process, which models information propagation on unstructured peer-to-peer networks, has the same features as SPAM reaching a single user. We study the behavior of the message interarrival time distribution on three different network topologies using two different rules for sending messages. In all networks the propagation is not a pure Poisson process. It shows universal features on Poissonian networks and a more complex behavior on scale-free networks. Results open the possibility to indirectly learn about the process of sending messages on networks with unknown topologies, by studying interarrival times at any node of the network.

  6. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the non-uniform turn-angle distribution of move step-lengths within a flight and the presence of two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
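    A minimal two-mode walk capturing the ingredients above — alternating flights with exponentially distributed lengths and a different turn-angle spread (degree of correlation) in each mode. The specific parameterization and all numeric values are invented for illustration; they are not the fitted parameters of the study.

    ```python
    import math
    import random

    def bcrw(n_flights, rng, mean_len=(10.0, 3.0), turn_sd=(0.2, 2.0)):
        """Bimodal correlated random walk: mode 0 = directional (small turn-angle
        spread), mode 1 = re-orientation (large spread); flight lengths are
        exponentially distributed step counts."""
        x = y = theta = 0.0
        path = [(x, y)]
        for f in range(n_flights):
            mode = f % 2                                   # alternate the two modes
            steps = max(1, round(rng.expovariate(1.0 / mean_len[mode])))
            for _ in range(steps):
                theta += rng.gauss(0.0, turn_sd[mode])     # correlated heading change
                x += math.cos(theta)
                y += math.sin(theta)
                path.append((x, y))
        return path
    ```

    With `turn_sd = (0.0, 0.0)` every step continues straight, so net displacement equals path length; increasing the spreads makes the walk progressively more diffusive, which is how simulations of this kind probe the super-diffusive regime reported for the cell tracks.
    
    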

  7. Probabilistic SSME blades structural response under random pulse loading

    NASA Technical Reports Server (NTRS)

    Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.

    1987-01-01

    The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
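    The loading model described — Poisson arrivals, zero-mean normal amplitudes, and equiprobable impact locations — is straightforward to sample for a Monte Carlo study. The numeric values and location names below are placeholders, not the study's actual parameters.

    ```python
    import random

    def simulate_pulses(rate, sigma, t_end, locations, rng):
        """One Monte Carlo realization of the random pulse load.

        Arrivals: Poisson process with mean rate (exponential inter-arrival gaps).
        Amplitude: zero-mean normal with std dev sigma (the pulse-intensity measure).
        Location: drawn with equal probability from the allowed impact points.
        """
        t, pulses = 0.0, []
        while True:
            t += rng.expovariate(rate)
            if t > t_end:
                break
            pulses.append((t, rng.gauss(0.0, sigma), rng.choice(locations)))
        return pulses

    rng = random.Random(3)
    pulses = simulate_pulses(rate=5.0, sigma=2.0, t_end=1000.0,
                             locations=("tip_1", "tip_2", "tip_3"), rng=rng)
    ```

    Feeding many such realizations through a structural model is exactly the direct Monte Carlo route to the distribution functions for stress and displacement that the analysis estimates.
    
    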

  8. New Quantum Key Distribution Scheme Based on Random Hybrid Quantum Channel with EPR Pairs and GHZ States

    NASA Astrophysics Data System (ADS)

    Yan, Xing-Yu; Gong, Li-Hua; Chen, Hua-Ying; Zhou, Nan-Run

    2018-05-01

    A theoretical quantum key distribution scheme based on random hybrid quantum channel with EPR pairs and GHZ states is devised. In this scheme, EPR pairs and tripartite GHZ states are exploited to set up random hybrid quantum channel. Only one photon in each entangled state is necessary to run forth and back in the channel. The security of the quantum key distribution scheme is guaranteed by more than one round of eavesdropping check procedures. It is of high capacity since one particle could carry more than two bits of information via quantum dense coding.

  9. Atom probe study of vanadium interphase precipitates and randomly distributed vanadium precipitates in ferrite.

    PubMed

    Nöhrer, M; Zamberger, S; Primig, S; Leitner, H

    2013-01-01

    Atom probe tomography and transmission electron microscopy were used to examine the precipitation reaction in the austenite and ferrite phases in vanadium micro-alloyed steel after a thermo-mechanical process. Precipitates were observed only in the ferrite phase, where two different types were detected. Thus, the aim was to reveal the difference between these two types. The first type was randomly distributed precipitates formed from V-supersaturated ferrite, and the second type was V interphase precipitates. Not only the arrangement of the particles differed, but also their chemical composition. The randomly distributed precipitates consisted of V, C and N, whereas the interphase precipitates showed a composition of V, C and Mn. Furthermore, the randomly distributed precipitates had a maximum size of 20 nm and the interphase precipitates a maximum size of 15 nm. It was assumed that these differences are caused by the site in which the precipitates were formed. The randomly distributed precipitates were formed in a matrix consisting mainly of 0.05 at% C, 0.68 at% Si, 0.03 at% N, 0.145 at% V and 1.51 at% Mn. The interphase precipitates were formed in a region with a much higher C, Mn and V content. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
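    The iid estimator and its routine standard error described above can be sketched in a few lines (a minimal illustration with made-up densities and sample size, not the paper's multiple-chain regenerative construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E_pi[X] for pi = N(3, 1) using draws from pi1 = N(0, 2).
def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

n = 200_000
x = rng.normal(0.0, 2.0, size=n)                   # sample from pi1
w = norm_pdf(x, 3.0, 1.0) / norm_pdf(x, 0.0, 2.0)  # importance weights pi/pi1
est = np.sum(w * x) / np.sum(w)                    # self-normalized IS estimate

# Simple consistent standard-error estimate from the iid CLT (ratio estimator).
se = np.sqrt(np.sum(w**2 * (x - est) ** 2)) / np.sum(w)
```

    Replacing the iid draws with a Markov chain keeps `est` consistent but invalidates this `se`, which is exactly the complication the paper addresses with regeneration.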

  11. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π 1 , is used to estimate an expectation with respect to another, π . The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π 1 is replaced by a Harris ergodic Markov chain with invariant density π 1 , then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π 1 , …, π k , are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.

  12. Willingness to pay for community-based ivermectin distribution: a study of three onchocerciasis-endemic communities in Nigeria.

    PubMed

    Onwujekwe, O E; Shu, E N; Nwagbo, D; Akpala, C O; Okonkwo, P O

    1998-10-01

    To determine the willingness to pay (WTP) for local ivermectin distribution in a community financing framework. Contingent valuation in three communities in Nigeria, using randomly selected household heads. WTP was elicited using a bidding game, and structured questionnaires were used to collect information on the households' socio-economic status, level of knowledge, priority ranking and perception of risk of contracting the disease. Ordinary least squares (OLS) multiple regression analysis was used to analyse factors associated with WTP. Between 92.1% and 93.3% of respondents were willing to pay amounts ranging from 5 Naira (US$ 0.06) to 100 Naira (US$ 1.25) (median: 20 Naira, US$ 0.25) in the three communities, more than three times the modelled unit direct cost of distributing ivermectin by the communities themselves. Occupation of the respondent, marital status, average monthly expenditure on health care, manifestations of onchocerciasis, the type of savings scheme embarked on by the respondent, age-group, level of education and type of property were statistically significant (P < 0.05) variables affecting WTP. This study shows that there is WTP for local ivermectin distribution in the three study communities, and that it should be assessed before instituting community-directed treatment with ivermectin.

  13. Achieving a strongly negative scattering asymmetry factor in random media composed of dual-dipolar particles

    NASA Astrophysics Data System (ADS)

    Wang, B. X.; Zhao, C. Y.

    2018-02-01

    Understanding radiative transfer in random media like micro- or nanoporous and particulate materials allows people to manipulate the scattering and absorption of radiation, and opens new possibilities in applications such as imaging through turbid media, photovoltaics, and radiative cooling. A strong-backscattering phase function, i.e., a negative scattering asymmetry parameter g, is of great interest, as it can possibly lead to unusual radiative transport phenomena, for instance, Anderson localization of light. Here we demonstrate that by utilizing the structural correlations and the second Kerker condition for a disordered medium composed of randomly distributed silicon nanoparticles, a strongly negative scattering asymmetry factor (g ≈ -0.5) for multiple light scattering can be realized in the near infrared. Based on the multipole expansion of the Foldy-Lax equations and the quasicrystalline approximation (QCA), we have rigorously derived analytical expressions for the effective propagation constant and scattering phase function of a random system containing spherical particles, taking the effect of structural correlations into account. We show that as the concentration of scattering particles rises, the backscattering is also enhanced. Moreover, in this circumstance, the transport mean free path is largely reduced and even becomes smaller than that predicted by the independent scattering approximation. We further explore the dependent scattering effects, including the modification of electric and magnetic dipole excitations and the far-field interference effect, both induced and influenced by the structural correlations, for volume fractions of particles up to f_v ≈ 0.25. Our results have profound implications for harnessing micro- or nanoscale radiative transfer through random media.
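    The asymmetry factor is the phase-function-weighted average of cos θ. As a sanity check, it can be recovered numerically from the standard Henyey-Greenstein phase function, whose parameter equals g by construction (an illustration only, not the paper's Mie/QCA calculation):

```python
import numpy as np

def hg_phase(mu, g):
    """Henyey-Greenstein phase function of mu = cos(theta), normalized so
    that its integral over mu in [-1, 1] equals 1."""
    return 0.5 * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

def asymmetry_factor(g, n=200_000):
    """g = <cos(theta)>, computed by the midpoint rule over mu in [-1, 1]."""
    edges = np.linspace(-1.0, 1.0, n + 1)
    mu = 0.5 * (edges[:-1] + edges[1:])
    p = hg_phase(mu, g)
    return np.sum(mu * p) / np.sum(p)
```

    A negative input g yields a phase function peaked toward μ = -1, i.e. the backscattering-dominated regime the paper targets.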

  14. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    NASA Astrophysics Data System (ADS)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
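    The reweighting idea behind such an adaptive robust fit can be sketched as follows (a minimal IRLS illustration with the degree of freedom and scale held fixed; the full ECME algorithm additionally estimates these along with the AR coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate y = X @ b with scaled (Student's) t errors, then fit b by IRLS.
nu, s = 4.0, 1.0                            # degree of freedom and scale (fixed here)
n = 2000
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
b_true = np.array([2.0, -1.0])
y = X @ b_true + s * rng.standard_t(nu, size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]    # ordinary least-squares start
for _ in range(30):
    r = y - X @ b
    w = (nu + 1.0) / (nu + (r / s) ** 2)    # t-likelihood weight downweights outliers
    sw = np.sqrt(w)
    b = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
```

    The weight formula is the standard E-step quantity for t-distributed errors; large residuals get small weights, which is what makes the estimator robust relative to plain least squares.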

  15. Ultrafast energy relaxation in single light-harvesting complexes

    DOE PAGES

    Maly, Pavel; Gruber, J. Michael; Cogdell, Richard J.; ...

    2016-02-22

    Energy relaxation in light-harvesting complexes has been extensively studied by various ultrafast spectroscopic techniques, the fastest processes being in the sub–100-fs range. At the same time, much slower dynamics have been observed in individual complexes by single-molecule fluorescence spectroscopy (SMS). In this work, we use a pump–probe-type SMS technique to observe the ultrafast energy relaxation in single light-harvesting complexes LH2 of purple bacteria. After excitation at 800 nm, the measured relaxation time distribution of multiple complexes has a peak at 95 fs and is asymmetric, with a tail at slower relaxation times. When tuning the excitation wavelength, the distribution changes in both its shape and position. The observed behavior agrees with what is to be expected from the LH2 excited states structure. As we show by a Redfield theory calculation of the relaxation times, the distribution shape corresponds to the expected effect of Gaussian disorder of the pigment transition energies. By repeatedly measuring few individual complexes for minutes, we find that complexes sample the relaxation time distribution on a timescale of seconds. Furthermore, by comparing the distribution from a single long-lived complex with the whole ensemble, we demonstrate that, regarding the relaxation times, the ensemble can be considered ergodic. Our findings thus agree with the commonly used notion of an ensemble of identical LH2 complexes experiencing slow random fluctuations.

  16. Ultrafast energy relaxation in single light-harvesting complexes.

    PubMed

    Malý, Pavel; Gruber, J Michael; Cogdell, Richard J; Mančal, Tomáš; van Grondelle, Rienk

    2016-03-15

    Energy relaxation in light-harvesting complexes has been extensively studied by various ultrafast spectroscopic techniques, the fastest processes being in the sub-100-fs range. At the same time, much slower dynamics have been observed in individual complexes by single-molecule fluorescence spectroscopy (SMS). In this work, we use a pump-probe-type SMS technique to observe the ultrafast energy relaxation in single light-harvesting complexes LH2 of purple bacteria. After excitation at 800 nm, the measured relaxation time distribution of multiple complexes has a peak at 95 fs and is asymmetric, with a tail at slower relaxation times. When tuning the excitation wavelength, the distribution changes in both its shape and position. The observed behavior agrees with what is to be expected from the LH2 excited states structure. As we show by a Redfield theory calculation of the relaxation times, the distribution shape corresponds to the expected effect of Gaussian disorder of the pigment transition energies. By repeatedly measuring few individual complexes for minutes, we find that complexes sample the relaxation time distribution on a timescale of seconds. Furthermore, by comparing the distribution from a single long-lived complex with the whole ensemble, we demonstrate that, regarding the relaxation times, the ensemble can be considered ergodic. Our findings thus agree with the commonly used notion of an ensemble of identical LH2 complexes experiencing slow random fluctuations.

  17. Ultrafast energy relaxation in single light-harvesting complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maly, Pavel; Gruber, J. Michael; Cogdell, Richard J.

    Energy relaxation in light-harvesting complexes has been extensively studied by various ultrafast spectroscopic techniques, the fastest processes being in the sub–100-fs range. At the same time, much slower dynamics have been observed in individual complexes by single-molecule fluorescence spectroscopy (SMS). In this work, we use a pump–probe-type SMS technique to observe the ultrafast energy relaxation in single light-harvesting complexes LH2 of purple bacteria. After excitation at 800 nm, the measured relaxation time distribution of multiple complexes has a peak at 95 fs and is asymmetric, with a tail at slower relaxation times. When tuning the excitation wavelength, the distribution changes in both its shape and position. The observed behavior agrees with what is to be expected from the LH2 excited states structure. As we show by a Redfield theory calculation of the relaxation times, the distribution shape corresponds to the expected effect of Gaussian disorder of the pigment transition energies. By repeatedly measuring few individual complexes for minutes, we find that complexes sample the relaxation time distribution on a timescale of seconds. Furthermore, by comparing the distribution from a single long-lived complex with the whole ensemble, we demonstrate that, regarding the relaxation times, the ensemble can be considered ergodic. Our findings thus agree with the commonly used notion of an ensemble of identical LH2 complexes experiencing slow random fluctuations.

  18. Theoretical size distribution of fossil taxa: analysis of a null model.

    PubMed

    Reed, William J; Hughes, Barry D

    2007-03-22

    This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
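    The competing-rates null model can be caricatured in a few lines (a toy simulation that ignores extinction; the rates and the geometric genus-size result below are illustrative assumptions of this sketch, not the authors' full analysis):

```python
import random

random.seed(0)

# Within a genus, each speciation event is "ordinary" with probability
# lam/(lam + mu), growing the genus by one species, or "radical" with
# probability mu/(lam + mu), founding a new genus and ending this run.
# Genus size is then geometric with mean (lam + mu)/mu.
lam, mu = 1.0, 0.3

def genus_size(max_events=10_000):
    size = 1
    for _ in range(max_events):
        if random.random() < lam / (lam + mu):
            size += 1
        else:
            break
    return size

sizes = [genus_size() for _ in range(20_000)]
mean_size = sum(sizes) / len(sizes)
```

    Monospecific genera (size 1) arise whenever the very first event is radical, which is why their probability is a natural quantity to compare across models, as the abstract does.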

  19. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography.

    PubMed

    Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-06-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have  >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p  <  0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a feasible approach for EIT images of neural activity.
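    The non-parametric route used for validation is typically a max-statistic permutation (here, sign-flip) test, which controls the family-wise error across voxels without random-field assumptions. The following sketch uses synthetic data of the same group size; the voxel count and effect are made up, not the study's images:

```python
import numpy as np

rng = np.random.default_rng(2)

# 22 difference images, 500 voxels; the first 10 voxels carry a true effect.
n_subj, n_vox = 22, 500
data = rng.normal(size=(n_subj, n_vox))
data[:, :10] += 1.5

def t_map(d):
    """One-sample t statistic per voxel."""
    return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(d.shape[0]))

t_obs = t_map(data)

# Null distribution of the image-wide maximum statistic via sign-flipping.
null_max = np.empty(500)
for k in range(500):
    flips = rng.choice([-1.0, 1.0], size=(n_subj, 1))
    null_max[k] = t_map(data * flips).max()

thresh = np.quantile(null_max, 0.95)   # family-wise corrected p < 0.05
sig = t_obs > thresh
```

    Comparing each voxel to the permuted maximum is what corrects for the >10 000-voxel multiple-testing problem in the non-parametric analysis.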

  20. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography

    PubMed Central

    Packham, B; Barnes, G; dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-01-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have  >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p  <  0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a feasible approach for EIT images of neural activity. PMID:27203477

  1. Negative probability of random multiplier in turbulence

    NASA Astrophysics Data System (ADS)

    Bai, Xuan; Su, Weidong

    2017-11-01

    The random multiplicative process (RMP), which has been proposed for over 50 years, is a convenient phenomenological ansatz for the turbulence cascade. In the RMP, the fluctuation at a large scale is statistically mapped to the one at a small scale by the linear action of an independent random multiplier (RM). Simple as it is, the RMP is powerful enough, since all of the known scaling laws can be included in this model. So far as we know, however, a direct extraction of the probability density function (PDF) of the RM has been lacking. The reason is that the deconvolution involved is ill-posed. Nevertheless, with progress in the study of inverse problems, the situation can be changed. By using some new regularization techniques, we recover, for the first time, the PDFs of the RMs in some turbulent flows. All the consistent results from various methods point to an amazing observation: the PDFs can attain negative values in some intervals; this can also be justified by some properties of infinitely divisible distributions. Despite the conceptual unconventionality, the present study illustrates the implications of negative probability in turbulence in several respects, with emphasis on its role in describing the interaction between fluctuations at different scales. This work is supported by the NSFC (No. 11221062 and No. 11521091).

  2. Exploring the existence of a stayer population with mover-stayer counting process models: application to joint damage in psoriatic arthritis.

    PubMed

    Yiu, Sean; Farewell, Vernon T; Tom, Brian D M

    2017-08-01

    Many psoriatic arthritis patients do not progress to permanent joint damage in any of the 28 hand joints, even under prolonged follow-up. This has led several researchers to fit models that estimate the proportion of stayers (those who do not have the propensity to experience the event of interest) and to characterize the rate of developing damaged joints in the movers (those who have the propensity to experience the event of interest). However, when fitted to the same data, the paper demonstrates that the choice of model for the movers can lead to widely varying conclusions on a stayer population, thus implying that, if interest lies in a stayer population, a single analysis should not generally be adopted. The aim of the paper is to provide greater understanding regarding estimation of a stayer population by comparing the inferences, performance and features of multiple fitted models to real and simulated data sets. The models for the movers are based on Poisson processes with patient level random effects and/or dynamic covariates, which are used to induce within-patient correlation, and observation level random effects are used to account for time varying unobserved heterogeneity. The gamma, inverse Gaussian and compound Poisson distributions are considered for the random effects.

  3. Two Models of Time Constrained Target Travel between Two Endpoints Constructed by the Application of Brownian Motion and a Random Tour.

    DTIC Science & Technology

    1983-03-01

    the Naval Postgraduate School. As my advisor, Prof. Gaver suggested and derived the Brownian bridge, as well as nudged me in the right direction when ... the random tour process by deriving the mean square radial distance for a random tour with arbitrary course change distribution ... random tour model ... results as expected. The notion of an arbitrary course change distribution is important because the

  4. Anonymous authenticated communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Cheryl L; Schroeppel, Richard C; Snyder, Lillian A

    2007-06-19

    A method of performing electronic communications between members of a group wherein the communications are authenticated as being from a member of the group and have not been altered, comprising: generating a plurality of random numbers; distributing in a digital medium the plurality of random numbers to the members of the group; publishing a hash value of contents of the digital medium; distributing to the members of the group public-key-encrypted messages each containing a same token comprising a random number; and encrypting a message with a key generated from the token and the plurality of random numbers.
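    The claimed steps can be sketched with standard hash and MAC primitives (a minimal illustration; all names, sizes, and the particular key-derivation and MAC choices below are assumptions of this sketch, not the patented construction's exact details):

```python
import hashlib
import hmac
import secrets

# Distribute a digital medium of random numbers, publish its hash, send each
# member the same random token, and derive a shared authentication key.
medium = secrets.token_bytes(1024)                   # plurality of random numbers
published_hash = hashlib.sha256(medium).hexdigest()  # published commitment to the medium

token = secrets.token_bytes(32)                      # same token sent to every member
key = hashlib.sha256(token + medium).digest()        # key from token + random numbers

def send(message: bytes):
    """Return (message, tag); any member holding the key can verify the tag."""
    return message, hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Authenticate as coming from some group member, unaltered."""
    return hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```

    Because every member holds the same key, a valid tag proves group membership and integrity without identifying which member sent the message, which is the anonymity property in the title.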

  5. Return probabilities and hitting times of random walks on sparse Erdös-Rényi graphs.

    PubMed

    Martin, O C; Sulc, P

    2010-03-01

    We consider random walks on random graphs, focusing on return probabilities and hitting times for sparse Erdös-Rényi graphs. Using the tree approach, which is expected to be exact in the large graph limit, we show how to solve for the distribution of these quantities and we find that these distributions exhibit a form of self-similarity.
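    A brute-force check of such return probabilities is easy to simulate (a toy G(n, m) graph with mean degree c and a plain random walk; this is illustrative only, not the authors' tree-approach solution):

```python
import random

random.seed(3)

# Build a sparse random graph on n nodes with about c*n/2 edges.
n, c = 2000, 4.0
adj = [[] for _ in range(n)]
edges = set()
while len(edges) < int(c * n / 2):
    i, j = random.randrange(n), random.randrange(n)
    if i != j:
        edges.add((min(i, j), max(i, j)))
for i, j in edges:
    adj[i].append(j)
    adj[j].append(i)

def return_probabilities(start, t_max=10, walks=5000):
    """p[t] = fraction of walks located at `start` after exactly t steps."""
    counts = [0] * (t_max + 1)
    for _ in range(walks):
        v = start
        for t in range(1, t_max + 1):
            v = random.choice(adj[v])
            if v == start:
                counts[t] += 1
    return [k / walks for k in counts]

start = next(v for v in range(n) if adj[v])   # pick a non-isolated start node
p = return_probabilities(start)
```

    With no self-loops a one-step return is impossible, and the two-step return probability is roughly the mean inverse degree of the start node's neighbors.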

  6. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    NASA Astrophysics Data System (ADS)

    Wen, Fang-Qing; Zhang, Gong; Ben, De

    2015-11-01

    This paper addresses the direction of arrival (DOA) estimation problem for the co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to limited data support and low signal-to-noise ratio (SNR) scenes. Furthermore, the proposed algorithm has a lower computational load compared to the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), China.

  7. An iterative fullwave simulation approach to multiple scattering in media with randomly distributed microbubbles

    NASA Astrophysics Data System (ADS)

    Joshi, Aditya; Lindsey, Brooks D.; Dayton, Paul A.; Pinton, Gianmarco; Muller, Marie

    2017-05-01

    Ultrasound contrast agents (UCA), such as microbubbles, enhance the scattering properties of blood, which is otherwise hypoechoic. The multiple scattering interactions of the acoustic field with UCA are poorly understood due to the complexity of the multiple scattering theories and the nonlinear microbubble response. The majority of bubble models describe the behavior of UCA as single, isolated microbubbles suspended in infinite medium. Multiple scattering models such as the independent scattering approximation can approximate phase velocity and attenuation for low scatterer volume fractions. However, all current models and simulation approaches only describe multiple scattering and nonlinear bubble dynamics separately. Here we present an approach that combines two existing models: (1) a full-wave model that describes nonlinear propagation and scattering interactions in a heterogeneous attenuating medium and (2) a Paul-Sarkar model that describes the nonlinear interactions between an acoustic field and microbubbles. These two models were solved numerically and combined with an iterative approach. The convergence of this combined model was explored in silico for 0.5 × 10^6 microbubbles ml^-1, 1% and 2% bubble concentration by volume. The backscattering predicted by our modeling approach was verified experimentally with water tank measurements performed with a 128-element linear array transducer. An excellent agreement in terms of the fundamental and harmonic acoustic fields is shown. Additionally, our model correctly predicts the phase velocity and attenuation measured using through transmission and predicted by the independent scattering approximation.

  8. Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.

    PubMed

    Paul, Jacob M; Reeve, Robert A; Forte, Jason D

    2017-03-01

    We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and show participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and specific enumeration demands affect spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.

  9. Hidden Connectivity in Networks with Vulnerable Classes of Nodes

    NASA Astrophysics Data System (ADS)

    Krause, Sebastian M.; Danziger, Michael M.; Zlatić, Vinko

    2016-10-01

    In many complex systems representable as networks, nodes can be separated into different classes. Often these classes can be linked to a mutually shared vulnerability. Shared vulnerabilities may be due to a shared eavesdropper or correlated failures. In this paper, we show the impact of shared vulnerabilities on robust connectivity and how the heterogeneity of node classes can be exploited to maintain functionality by utilizing multiple paths. Percolation is the field of statistical physics that is generally used to analyze connectivity in complex networks, but in its existing forms, it cannot treat the heterogeneity of multiple vulnerable classes. To analyze the connectivity under these constraints, we describe each class as a color and develop a "color-avoiding" percolation. We present an analytic theory for random networks and a numerical algorithm for all networks, with which we can determine which nodes are color-avoiding connected and whether the maximal set percolates in the system. We find that the interaction of topology and color distribution implies a rich critical behavior, with critical values and critical exponents depending both on the topology and on the color distribution. Applying our physics-based theory to the Internet, we show how color-avoiding percolation can be used as the basis for new topologically aware secure communication protocols. Beyond applications to cybersecurity, our framework reveals a new layer of hidden structure in a wide range of natural and technological systems.
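    Color-avoiding connectivity between two nodes can be checked by brute force: for every color, search for a path whose intermediate nodes avoid that color. The toy four-node check below is illustrative; the paper's contribution is the analytic theory for random networks and a scalable algorithm:

```python
def connected_avoiding(adj, colors, u, v, c):
    """Is there a u-v path whose intermediate nodes all avoid color c?
    Endpoints are exempt (they are the trusted communicating pair)."""
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen and (y == v or colors[y] != c):
                seen.add(y)
                stack.append(y)
    return False

def color_avoiding_connected(adj, colors, u, v):
    """u and v must stay connected no matter which single color is compromised."""
    return all(connected_avoiding(adj, colors, u, v, c) for c in set(colors))

# Square graph 0-1-3-2-0; node 1 has color 'r', node 2 has color 'b'.
adj = [[1, 2], [0, 3], [0, 3], [1, 2]]
colors = ['r', 'r', 'b', 'b']
ok = color_avoiding_connected(adj, colors, 0, 3)
```

    In the square graph, messages between 0 and 3 can route around a compromised 'r' node via node 2 and around a compromised 'b' node via node 1, so the pair is color-avoiding connected; removing the 0-2 edge breaks this.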

  10. Extending the Distributed Lag Model framework to handle chemical mixtures.

    PubMed

    Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris

    2017-07-01

    Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce 2 techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures. Copyright © 2017 Elsevier Inc. All rights reserved.
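    The core distributed-lag idea can be sketched as a regression on a lag-design matrix (a toy single-exposure DLM recovered by OLS with made-up weights; the paper's Lagged WQS and Tree-based DLM extensions for mixtures are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Outcome depends on exposure at lags 0..5 with decaying weights; OLS on
# the lag-design matrix recovers the lag curve.
n, L = 3000, 5
x = rng.normal(size=n + L)                        # longitudinal exposure series
beta = np.array([1.0, 0.7, 0.4, 0.2, 0.1, 0.0])   # true lag weights (lag 0..5)
X = np.column_stack([x[L - l : n + L - l] for l in range(L + 1)])
y = X @ beta + rng.normal(scale=0.5, size=n)
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # estimated lag curve
```

    The estimated lag curve `beta_hat` identifies the time window in which exposure matters, which is the "critical window" notion the abstract applies to neurodevelopment.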

  11. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
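
    The core claim — the log of a product of seven independent factors is approximately normal by the CLT, so N itself is approximately lognormal, with lognormal mean close to the classical Drake N — can be checked by Monte Carlo. A sketch with toy factor values (not the paper's astrophysical estimates):

```python
import math
import random

random.seed(42)

# Seven illustrative (mean, sd) pairs; each factor is uniform around its mean.
factors = [(350e9, 1e9), (0.5, 0.1), (1.0, 0.2), (0.5, 0.1),
           (0.2, 0.05), (0.2, 0.05), (1e-7, 2e-8)]

def draw_factor(mean, sd):
    # Uniform on [mean - sd*sqrt(3), mean + sd*sqrt(3)] has this mean and sd.
    half = sd * math.sqrt(3)
    return random.uniform(mean - half, mean + half)

samples = [math.prod(draw_factor(m, s) for m, s in factors)
           for _ in range(20000)]
logs = [math.log(x) for x in samples]
mu = sum(logs) / len(logs)
var = sum((l - mu) ** 2 for l in logs) / len(logs)

# If N is lognormal, its mean is exp(mu + var/2); this should be close to the
# product of the factor means, i.e. the classical Drake N.
classical_N = math.prod(m for m, _ in factors)
print(classical_N, math.exp(mu + var / 2))
```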

  12. Calculation of absolute protein-ligand binding free energy using distributed replica sampling.

    PubMed

    Rodinger, Tomas; Howell, P Lynne; Pomès, Régis

    2008-10-21

    Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.

  13. Calculation of absolute protein-ligand binding free energy using distributed replica sampling

    NASA Astrophysics Data System (ADS)

    Rodinger, Tomas; Howell, P. Lynne; Pomès, Régis

    2008-10-01

    Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
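
    The distribution-dependent bias can be illustrated with a toy discrete model: each replica's move along the coordinate is penalized by the crowding of all replicas in the target bin, driving the ensemble toward a chosen (here uniform) sampling distribution. This sketches the principle only, not the published algorithm or its Boltzmann-weighted jumping:

```python
import random

random.seed(1)

BINS = 10           # discretized reaction coordinate
STEPS = 5000
replicas = [0] * 8  # all replicas start crowded into bin 0

def occupancy():
    counts = [0] * BINS
    for r in replicas:
        counts[r] += 1
    return counts

for _ in range(STEPS):
    i = random.randrange(len(replicas))
    move = random.choice((-1, 1))
    new = min(BINS - 1, max(0, replicas[i] + move))
    counts = occupancy()
    # Bias: moves into less-crowded bins are always accepted; moves into
    # more-crowded bins are suppressed. This stands in for the
    # distribution-dependent potential-energy term in the paper.
    delta = counts[new] - (counts[replicas[i]] - 1)
    if delta <= 0 or random.random() < 0.5 ** delta:
        replicas[i] = new

print(occupancy())  # replicas end up spread along the coordinate
```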

  14. Comparison of multiplicity distributions to the negative binomial distribution in muon-proton scattering

    NASA Astrophysics Data System (ADS)

    Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badełek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Ftáčnik, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffré, M.; Jachołkowska, A.; Janata, F.; Jancsó, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettinghale, J.; Pietrzyk, B.; Pietrzyk, U.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Schneider, A.; Scholz, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.

    1987-09-01

The multiplicity distributions of charged hadrons produced in deep inelastic muon-proton scattering at 280 GeV are analysed in various rapidity intervals, as a function of the total hadronic centre of mass energy W ranging from 4 to 20 GeV. Multiplicity distributions for the backward and forward hemispheres are also analysed separately. The data can be well parameterized by negative binomial distributions, extending their range of applicability to the case of lepton-proton scattering. The energy and rapidity dependence of the parameters is presented, and a smooth transition from the negative binomial distribution via Poissonian to the ordinary binomial is observed.
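
    The negative binomial pmf used in such fits, and its Poisson limit as the shape parameter k grows, can be sketched as follows (a generic illustration, not the experiment's analysis code):

```python
import math

def neg_binomial_pmf(n, nbar, k):
    # P(n) for the negative binomial with mean nbar and shape k:
    # P(n) = Gamma(n+k)/(n! Gamma(k)) * (k/(k+nbar))^k * (nbar/(k+nbar))^n
    log_p = (math.lgamma(n + k) - math.lgamma(n + 1) - math.lgamma(k)
             + k * math.log(k / (k + nbar))
             + n * math.log(nbar / (k + nbar)))
    return math.exp(log_p)

def poisson_pmf(n, nbar):
    return math.exp(-nbar) * nbar ** n / math.factorial(n)

# As 1/k -> 0 the negative binomial approaches the Poisson distribution,
# one end of the transition the abstract describes.
nbar = 5.0
for n in range(10):
    print(n, neg_binomial_pmf(n, nbar, 1e6), poisson_pmf(n, nbar))
```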

  15. Effective transport properties of composites of spheres

    NASA Astrophysics Data System (ADS)

    Felderhof, B. U.

    1994-06-01

The effective linear transport properties of composites of spheres may be studied by the methods of statistical physics. The analysis leads to an exact cluster expansion. The resulting expression for the transport coefficients may be evaluated approximately as the sum of a mean field contribution and correction terms, given by cluster integrals over two-sphere and three-sphere correlation functions. Calculations of this nature have been performed for the effective dielectric constant, as well as the effective elastic constants of composites of spheres. Accurate numerical data for the effective properties may be obtained by computer simulation. An efficient formulation uses multipole expansion in Cartesian coordinates and periodic boundary conditions. Extensive numerical results have been obtained for the effective dielectric constant of a suspension of randomly distributed spheres.
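
    For the dielectric case, the mean-field contribution referred to above is the classical Clausius-Mossotti (Maxwell-Garnett) result; a small sketch solving it for the effective constant (illustrative, and not a substitute for the cluster-expansion corrections):

```python
def maxwell_garnett(eps_matrix, eps_inclusion, phi):
    # Clausius-Mossotti / Maxwell-Garnett mean-field estimate for the
    # effective dielectric constant of spheres (eps_inclusion) at volume
    # fraction phi embedded in a host medium (eps_matrix):
    #   (e_eff - e_m)/(e_eff + 2 e_m) = phi * (e_i - e_m)/(e_i + 2 e_m)
    beta = (eps_inclusion - eps_matrix) / (eps_inclusion + 2 * eps_matrix)
    return eps_matrix * (1 + 2 * beta * phi) / (1 - beta * phi)

print(maxwell_garnett(1.0, 10.0, 0.3))
```

    The formula correctly reduces to the host constant at phi = 0 and the inclusion constant at phi = 1, and interpolates monotonically in between.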

  16. An entropy-based nonparametric test for the validation of surrogate endpoints.

    PubMed

    Miao, Xiaopeng; Wang, Yong-Cheng; Gangopadhyay, Ashis

    2012-06-30

We present a nonparametric test to validate surrogate endpoints, based on a measure of divergence and random permutations. The test is proposed as a direct verification of the Prentice statistical definition of surrogacy. It does not impose distributional assumptions on the endpoints, and it is robust to model misspecification. Our simulation study shows that the proposed nonparametric test outperforms the practical test of the Prentice criterion in terms of both robustness of size and power. We also evaluate the performance of three leading methods that attempt to quantify the effect of surrogate endpoints. The proposed method is applied to validate magnetic resonance imaging lesions as the surrogate endpoint for clinical relapses in a multiple sclerosis trial. Copyright © 2012 John Wiley & Sons, Ltd.
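
    The random-permutation machinery behind such a test can be sketched generically; the statistic below is a simple mean difference standing in for the paper's entropy-based divergence:

```python
import random

random.seed(0)

def mean_diff(x, y):
    # Simple stand-in statistic; the paper uses an entropy-based divergence.
    return abs(sum(x) / len(x) - sum(y) / len(y))

def permutation_pvalue(x, y, stat=mean_diff, n_perm=2000):
    # Shuffle group labels and count how often the permuted statistic
    # reaches the observed one (with the +1 correction for validity).
    observed = stat(x, y)
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Two clearly separated samples should yield a small p-value.
x = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 5.1, 5.2]
y = [7.0, 7.2, 6.9, 7.1, 7.3, 7.0, 7.2, 6.8]
print(permutation_pvalue(x, y))
```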

  17. Dynamical behaviors of inter-out-of-equilibrium state intervals in Korean futures exchange markets

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Kim, Kyungsik; Lee, Dong-In; Scalas, Enrico

    2008-05-01

A recently discovered feature of financial markets, the two-phase phenomenon, is utilized to categorize a financial time series into two phases, namely equilibrium and out-of-equilibrium states. For out-of-equilibrium states, we analyze the time intervals at which the state is revisited. The power-law distribution of inter-out-of-equilibrium state intervals is shown and we present an analogy with discrete-time heat bath dynamics, similar to random Ising systems. In the mean-field approximation, this model reduces to a one-dimensional multiplicative process. By varying global and local model parameters, the relationship between volatilities in financial markets and the interaction strengths between agents in the Ising model is investigated and discussed.

  18. Predicting active-layer soil thickness using topographic variables at a small watershed scale

    PubMed Central

    Li, Aidi; Tan, Xing; Wu, Wei; Liu, Hongbin; Zhu, Jie

    2017-01-01

Knowledge about the spatial distribution of active-layer (AL) soil thickness is indispensable for ecological modeling, precision agriculture, and land resource management. However, it is difficult to obtain the details on AL soil thickness by using conventional soil survey methods. In this research, the objective is to investigate the possibility and accuracy of mapping the spatial distribution of AL soil thickness through a random forest (RF) model by using terrain variables at a small watershed scale. A total of 1113 soil samples collected from the slope fields were randomly divided into calibration (770 soil samples) and validation (343 soil samples) sets. Seven terrain variables including elevation, aspect, relative slope position, valley depth, flow path length, slope height, and topographic wetness index were derived from a digital elevation map (30 m). The RF model was compared with multiple linear regression (MLR), geographically weighted regression (GWR) and support vector machines (SVM) approaches based on the validation set. Model performance was evaluated by precision criteria of mean error (ME), mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2). Comparative results showed that RF outperformed the MLR, GWR and SVM models. The RF gave better values of ME (0.39 cm), MAE (7.09 cm), and RMSE (10.85 cm) and a higher R2 (62%). The sensitivity analysis demonstrated that the DEM had less uncertainty than the AL soil thickness. The outcome of the RF model indicated that elevation, flow path length and valley depth were the most important factors affecting the AL soil thickness variability across the watershed. These results demonstrated that the RF model is a promising method for predicting the spatial distribution of AL soil thickness using terrain parameters. PMID:28877196
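
    The precision criteria used to compare the models (ME, MAE, RMSE, R2) have standard definitions; a quick sketch:

```python
import math

def regression_metrics(observed, predicted):
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    me = sum(errors) / n                               # mean error (bias)
    mae = sum(abs(e) for e in errors) / n              # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errors) / n)   # root mean square error
    mean_obs = sum(observed) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1 - ss_res / ss_tot                           # coefficient of determination
    return me, mae, rmse, r2

# Hypothetical thickness values (cm), not the study's data.
obs = [30.0, 45.0, 60.0, 75.0, 90.0]
pred = [32.0, 44.0, 58.0, 78.0, 88.0]
print(regression_metrics(obs, pred))
```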

  19. Optimizing placements of ground-based snow sensors for areal snow cover estimation using a machine-learning algorithm and melt-season snow-LiDAR data

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2016-12-01

    We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.

  20. Atomoxetine pharmacokinetics in healthy Chinese subjects and effect of the CYP2D6*10 allele.

    PubMed

    Cui, Yi M; Teng, Choo H; Pan, Alan X; Yuen, Eunice; Yeo, Kwee P; Zhou, Ying; Zhao, Xia; Long, Amanda J; Bangs, Mark E; Wise, Stephen D

    2007-10-01

    To characterize atomoxetine pharmacokinetics, explore the effect of the homozygous CYP2D6*10 genotype on atomoxetine pharmacokinetics and evaluate the tolerability of atomoxetine, in healthy Chinese subjects. Twenty-four subjects, all CYP2D6 extensive metabolizers (EM), were randomized to receive atomoxetine (40 mg qd for 3 days, then 80 mg qd for 7 days) or matching placebo (2 : 1 ratio) in a double-blind fashion. Atomoxetine serum concentrations were measured following single (40 mg) and multiple (80 mg) doses. Adverse events, clinical safety laboratory data and vital signs were assessed during the study. Atomoxetine was rapidly absorbed with median time to maximum serum concentrations of approximately 1.5 h after single and multiple doses. Atomoxetine concentrations appeared to decrease monoexponentially with a mean apparent terminal half-life (t(1/2)) of approximately 4 h. The apparent clearance, apparent volume of distribution and t(1/2) following single and multiple doses were similar, suggesting linear pharmacokinetics with respect to time. Homozygous CYP2D6*10 subjects had 50% lower clearances compared with other EM subjects, resulting in twofold higher mean exposures. No clinically significant changes or abnormalities were noted in laboratory data and vital signs. The pharmacokinetics of atomoxetine in healthy Chinese subjects appears comparable to other ethnic populations. Multiple dosing of 80 mg qd atomoxetine was well tolerated in this study.
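
    The reported monoexponential decline implies the usual one-compartment relations t1/2 = ln 2 / k and CL/F = dose / AUC with AUC = C0 / k. A toy sketch with illustrative numbers (not the study's estimates):

```python
import math

def half_life(k_elim):
    # t1/2 = ln(2) / k for monoexponential decline C(t) = C0 * exp(-k t)
    return math.log(2) / k_elim

def apparent_clearance(dose, c0, k_elim):
    # CL/F = dose / AUC, with AUC = C0 / k for a one-compartment bolus model
    return dose / (c0 / k_elim)

# A t1/2 of ~4 h (as in the abstract) implies k = ln(2)/4 per hour.
k = math.log(2) / 4.0
print(half_life(k))                       # -> 4.0 (hours)
print(apparent_clearance(80.0, 2.0, k))   # dose in mg, C0 in mg/L -> L/h
```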

  1. Alignment between Protostellar Outflows and Filamentary Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephens, Ian W.; Dunham, Michael M.; Myers, Philip C.

    2017-09-01

We present new Submillimeter Array (SMA) observations of CO(2–1) outflows toward young, embedded protostars in the Perseus molecular cloud as part of the Mass Assembly of Stellar Systems and their Evolution with the SMA (MASSES) survey. For 57 Perseus protostars, we characterize the orientation of the outflow angles and compare them with the orientation of the local filaments as derived from Herschel observations. We find that the relative angles between outflows and filaments are inconsistent with purely parallel or purely perpendicular distributions. Instead, the observed distribution of outflow-filament angles is more consistent with either randomly aligned angles or a mix of projected parallel and perpendicular angles. A mix of parallel and perpendicular angles requires perpendicular alignment to be more common by a factor of ∼3. Our results show that the observed distributions probably hold regardless of the protostar's multiplicity, age, or the host core's opacity. These observations indicate that the angular momentum axis of a protostar may be independent of the large-scale structure. We discuss the significance of independent protostellar rotation axes in the general picture of filament-based star formation.

  2. Physical activity staging distribution: establishing a heuristic using multiple studies.

    PubMed

    Nigg, C; Hellsten, L; Norman, G; Braun, L; Breger, R; Burbank, P; Coday, M; Elliot, D; Garber, C; Greaney, M; Keteyian, S; Lees, F; Matthews, C; Moe, E; Resnick, B; Riebe, D; Rossi, J; Toobert, D; Wang, T; Welk, G; Williams, G

    2005-04-01

    The purpose of this study was to identify the population prevalence across the stages of change (SoC) for regular physical activity and to establish the prevalence of people at risk. With support from the National Institutes of Health, the American Heart Association, and the Robert Wood Johnson Foundation, nine Behavior Change Consortium studies with a common physical activity SoC measure agreed to collaborate and share data. The distribution pattern identified in these predominantly reactively recruited studies was Precontemplation (PC) = 5% (+/- 10), Contemplation (C) = 10% (+/- 10), Preparation (P) = 40% (+/- 10), Action = 10% (+/- 10), and Maintenance = 35% (+/- 10). With reactively recruited studies, it can be anticipated that there will be a higher percentage of the sample that is ready to change and a greater percentage of currently active people compared to random representative samples. The at-risk stage distribution (i.e., those not at criteria or PC, C, and P) was approximately 10% PC, 20% C, and 70% P in specific samples and approximately 20% PC, 10% C, and 70% P in the clinical samples. Knowing SoC heuristics can inform public health practitioners and policymakers about the population's motivation for physical activity, help track changes over time, and assist in the allocation of resources.

  3. The Dynamics of Power laws: Fitness and Aging in Preferential Attachment Trees

    NASA Astrophysics Data System (ADS)

    Garavaglia, Alessandro; van der Hofstad, Remco; Woeginger, Gerhard

    2017-09-01

Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that individuals receive a finite number of children in their lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with fitness with unbounded support with at most exponential tails. In the absence of integrable aging, such processes are explosive.
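
    A discrete-time caricature of such a process can be simulated directly: each new node attaches to an existing node with probability proportional to (degree + 1) × fitness × aging factor. This is an illustrative sketch, not the continuous-time branching process analyzed in the paper:

```python
import random

random.seed(7)

def grow_tree(n, fitness, aging):
    # Node t attaches to an existing node chosen with probability
    # proportional to (degree + 1) * fitness * aging(age).
    degree = [0]          # node 0 is the root
    birth = [0]
    parents = [None]
    for t in range(1, n):
        weights = [(degree[i] + 1) * fitness[i] * aging(t - birth[i])
                   for i in range(len(degree))]
        target = random.choices(range(len(degree)), weights=weights)[0]
        degree[target] += 1
        degree.append(1)  # the newcomer's edge to its parent
        birth.append(t)
        parents.append(target)
    return degree, parents

# Bounded fitness and integrable aging, echoing one regime in the abstract.
fitness = [random.uniform(0.5, 2.0) for _ in range(1000)]
degree, parents = grow_tree(1000, fitness, aging=lambda a: 1.0 / (1 + a) ** 2)
print(max(degree), sum(degree))
```

    With integrable aging, old nodes stop collecting children, which is what suppresses the heavy power-law tail for bounded fitnesses.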

  4. Random walk to a nonergodic equilibrium concept

    NASA Astrophysics Data System (ADS)

    Bel, G.; Barkai, E.

    2006-01-01

    Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.

  5. Anisotropy Induced Switching Field Distribution in High-Density Patterned Media

    NASA Astrophysics Data System (ADS)

    Talapatra, A.; Mohanty, J.

We present here a micromagnetic study of the variation of the switching field distribution (SFD) in high-density patterned media as a function of the magnetic anisotropy of the system. We consider the manifold effects of magnetic anisotropy in terms of its magnitude, tilt of the anisotropy axis, and random arrangements of magnetic islands with random anisotropy values. Our calculations show that a reduction in anisotropy causes a linear decrease in coercivity, because the anisotropy energy tries to align the spins along a preferred crystallographic direction. A tilt of the anisotropy axis decreases the squareness of the hysteresis loop and hence facilitates switching. Finally, experimental realities such as the lithographic distribution of magnetic islands, their orientation, and the creation of defects demand that the anisotropy distribution be random, with random repetitions. We show that the range of anisotropy values and the number of bits with different anisotropy play a key role in the SFD, whereas the position of the bits and their repetitions do not contribute considerably.

  6. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.

  7. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
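
    For small cases the exact distribution of distances can be checked by brute force, enumerating every placement of s successes among n trials (a sketch of the simple unrestricted case, not the package's closed-form implementation):

```python
from collections import Counter
from itertools import combinations

def gap_distribution(n, s):
    # Enumerate every way to place s successes in n discrete time slots and
    # tally the distances between consecutive successes.
    gaps = Counter()
    total = 0
    for positions in combinations(range(n), s):
        for a, b in zip(positions, positions[1:]):
            gaps[b - a] += 1
        total += 1
    # Normalize over all consecutive pairs to get a probability distribution.
    pairs = total * (s - 1)
    return {d: c / pairs for d, c in sorted(gaps.items())}

print(gap_distribution(10, 3))
```

    Short distances are over-represented relative to long ones, which is exactly what the associated test for grouping probes.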

  8. Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions

    PubMed Central

    König, Sandra; Schauer, Stefan

    2016-01-01

    Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. Assuming that the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the hereby constructible total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572

  9. Biological monitoring of environmental quality: The use of developmental instability

    USGS Publications Warehouse

    Freeman, D.C.; Emlen, J.M.; Graham, J.H.; Hough, R. A.; Bannon, T.A.

    1994-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails.

  10. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.

  11. Differentiation of fat, muscle, and edema in thigh MRIs using random forest classification

    NASA Astrophysics Data System (ADS)

    Kovacs, William; Liu, Chia-Ying; Summers, Ronald M.; Yao, Jianhua

    2016-03-01

    There are many diseases that affect the distribution of muscles, including Duchenne and fascioscapulohumeral dystrophy among other myopathies. In these disease cases, it is important to quantify both the muscle and fat volumes to track the disease progression. There has also been evidence that abnormal signal intensity on the MR images, which often is an indication of edema or inflammation, can be a good predictor of muscle deterioration. We present a fully automated method that examines magnetic resonance (MR) images of the thigh and identifies the fat, muscle, and edema using a random forest classifier. First, the thigh regions are automatically segmented using the T1 sequence. Then, inhomogeneity artifacts are corrected using the N3 technique. The T1 and STIR (short tau inversion recovery) images are then aligned using landmark-based registration with the bone marrow. The normalized T1 and STIR intensity values are used to train the random forest. Once trained, the random forest can accurately classify the aforementioned classes. This method was evaluated on MR images of 9 patients. The precision values are 0.91+/-0.06, 0.98+/-0.01 and 0.50+/-0.29 for muscle, fat, and edema, respectively. The recall values are 0.95+/-0.02, 0.96+/-0.03 and 0.43+/-0.09 for muscle, fat, and edema, respectively. This demonstrates the feasibility of utilizing information from multiple MR sequences for the accurate quantification of fat, muscle and edema.

  12. Data Randomization and Cluster-Based Partitioning for Botnet Intrusion Detection.

    PubMed

    Al-Jarrah, Omar Y; Alhussein, Omar; Yoo, Paul D; Muhaidat, Sami; Taha, Kamal; Kim, Kwangjo

    2016-08-01

    Botnets, which consist of remotely controlled compromised machines called bots, provide a distributed platform for several threats against cyber world entities and enterprises. An intrusion detection system (IDS) provides an efficient countermeasure against botnets: it continually monitors and analyzes network traffic for potential vulnerabilities and the possible existence of active attacks. A payload-inspection-based IDS (PI-IDS) identifies active intrusion attempts by inspecting the payloads of transmission control protocol and user datagram protocol packets and comparing them with previously seen attack signatures. However, the ability of a PI-IDS to detect intrusions might be incapacitated by packet encryption. A traffic-based IDS (T-IDS) alleviates the shortcomings of the PI-IDS: it does not inspect the packet payload but instead analyzes the packet header to identify intrusions. As network traffic grows rapidly, not only the detection rate but also the efficiency and scalability of the IDS become significant. In this paper, we propose a state-of-the-art T-IDS built on a novel randomized data partitioned learning model (RDPLM), relying on a compact network feature set and feature selection techniques, simplified subspacing and a multiple randomized meta-learning technique. The proposed model achieved 99.984% accuracy and a 21.38 s training time on a well-known benchmark botnet dataset. Experimental results demonstrate that the proposed methodology outperforms other well-known machine-learning models used in the same detection task, namely, sequential minimal optimization, deep neural network, C4.5, reduced error pruning tree, and randomTree.

  13. Signs of universality in the structure of culture

    NASA Astrophysics Data System (ADS)

    Băbeanu, Alexandru-Ionuţ; Talman, Leandros; Garlaschelli, Diego

    2017-11-01

    Understanding the dynamics of opinions, of preferences and of culture as a whole requires more use of empirical data than has been made so far. It is clear that an important role in driving these dynamics is played by social influence, which is the essential ingredient of many quantitative models. Such models require that all traits are fixed when specifying the "initial cultural state". Typically, this initial state is randomly generated from a uniform distribution over the set of possible combinations of traits. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state. If the latter is sampled from empirical data instead of being generated in a uniformly random way, a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behavior in the short term. Moreover, if the initial state is randomized by shuffling the empirical traits among people, the level of long-term cultural diversity lies between those obtained for the empirical and uniformly random counterparts. The current study repeats the analysis for multiple empirical data sets, showing that the results are remarkably similar, although the matrix of correlations between cultural variables clearly differs across data sets. This points towards robust structural properties inherent in empirical cultural states, possibly due to universal laws governing the dynamics of culture in the real world. The results also suggest that these dynamics might be characterized by criticality and involve mechanisms beyond social influence.

  14. On the genealogy of branching random walks and of directed polymers

    NASA Astrophysics Data System (ADS)

    Derrida, Bernard; Mottishaw, Peter

    2016-08-01

    It is well known that the mean-field theory of directed polymers in a random medium exhibits replica symmetry breaking with a distribution of overlaps which consists of two delta functions. Here we show that the leading finite-size correction to this distribution of overlaps has a universal character which can be computed explicitly. Our results can also be interpreted as genealogical properties of branching Brownian motion or of branching random walks.

  15. Effects of absorption on multiple scattering by random particulate media: exact results.

    PubMed

    Mishchenko, Michael I; Liu, Li; Hovenier, Joop W

    2007-10-01

    We employ the numerically exact superposition T-matrix method to perform extensive computations of electromagnetic scattering by a volume of discrete random medium densely filled with increasingly absorbing as well as non-absorbing particles. Our numerical data demonstrate that increasing absorption diminishes and nearly extinguishes certain optical effects, such as depolarization and coherent backscattering, and increases the angular width of coherent backscattering patterns. This result corroborates the multiple-scattering origin of such effects and further demonstrates the heuristic value of the concept of multiple scattering even in application to densely packed particulate media.

  16. Evaluation of Scat Deposition Transects versus Radio Telemetry for Developing a Species Distribution Model for a Rare Desert Carnivore, the Kit Fox.

    PubMed

    Dempsey, Steven J; Gese, Eric M; Kluever, Bryan M; Lonsinger, Robert C; Waits, Lisette P

    2015-01-01

    Development and evaluation of noninvasive methods for monitoring species distribution and abundance is a growing area of ecological research. While noninvasive methods have the advantage of a reduced risk of the negative factors associated with capture, comparisons with more traditional invasive sampling methods are lacking. Historically, kit foxes (Vulpes macrotis) occupied the desert and semi-arid regions of southwestern North America. Once the most abundant carnivore in the Great Basin Desert of Utah, the species is now considered rare. In recent decades, attempts have been made to model the environmental variables influencing kit fox distribution. Using noninvasive scat deposition surveys to determine kit fox presence, we modeled resource selection functions to predict kit fox distribution using three popular techniques (Maxent, fixed-effects, and mixed-effects generalized linear models) and compared these with similar models developed from invasive sampling (telemetry locations from radio-collared foxes). Resource selection functions were developed using a combination of landscape variables including elevation, slope, aspect, vegetation height, and soil type. All models were tested against subsequent scat collections as a method of model validation. We demonstrate the importance of comparing multiple model types when developing resource selection functions used to predict a species distribution, and of evaluating the importance of environmental variables on species distribution. All models we examined showed a large effect of elevation on kit fox presence, followed by slope and vegetation height. However, the invasive sampling method (i.e., radio-telemetry) appeared to be better at determining resource selection, and therefore may be more robust in predicting kit fox distribution. In contrast, the distribution maps created from the noninvasive sampling (i.e., scat transects) differed significantly from those of the invasive method; thus, scat transects may be appropriate when used in an occupancy framework to predict species distribution. We concluded that while scat deposition transects may be useful for monitoring kit fox abundance and possibly occupancy, they do not appear to be appropriate for determining resource selection. On our study area, scat transects were biased toward roadways, while data collected using radio-telemetry were dictated by the movements of the kit foxes themselves. We recommend that future studies applying noninvasive scat sampling consider a more robust random sampling design across the landscape (e.g., random transects or more complete road coverage) that would provide a more accurate and unbiased depiction of resource selection useful for predicting kit fox distribution.

  17. All optical mode controllable Er-doped random fiber laser with distributed Bragg gratings.

    PubMed

    Zhang, W L; Ma, R; Tang, C H; Rao, Y J; Zeng, X P; Yang, Z J; Wang, Z N; Gong, Y; Wang, Y S

    2015-07-01

    An all-optical method to control the lasing modes of Er-doped random fiber lasers (RFLs) is proposed and demonstrated. In the RFL, an Er-doped fiber (EDF) inscribed with randomly separated fiber Bragg gratings (FBGs) is used as the gain medium and as randomly distributed reflectors, as well as the controllable element. By combining the random feedback of the FBG array and the Fresnel feedback of a cleaved fiber end, multi-mode coherent random lasing is obtained with a threshold of 14 mW and a power efficiency of 14.4%. Moreover, a laterally injected control light is used to induce local gain perturbation, providing additional gain for certain random resonance modes. As a result, active mode selection of the RFL is realized by changing the locations of the laser cavity that are exposed to the control light.

  18. SU-F-T-192: Study of Robustness Analysis Method of Multiple Field Optimized IMPT Plans for Head & Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Wang, X; Li, H

    Purpose: Proton therapy is more sensitive to uncertainties than photon treatments because the finite range of protons depends on tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted in our institute for robustness analysis of IMPT plans. This work demonstrates that the WCS method sufficiently accounts for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse-square factor and range uncertainty, were explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y, and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and the x, y, and z directions were created, and the corresponding dose distributions were calculated using the approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the worst-case scenario result. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases showed higher values than the worst-case scenario; for D5 of the CTVs, at least 98% of perturbed cases had lower values. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness of MFO IMPT plans for H&N patients. The extensive sampling approach using the fast approximate method could be used in the future to evaluate the effects of different factors on the robustness of IMPT plans.

  19. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    PubMed

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. 
Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
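    As a toy version of the sampling model described above, the sketch below counts occurrences of a short word pattern in simulated reads and compares the observed count with its expectation under an i.i.d. uniform background; the genome length, read length, read count, and pattern are arbitrary illustrative choices, not values from the paper:

```python
import random

def count_pattern(reads, pattern):
    """Count (possibly overlapping) occurrences of `pattern` across reads."""
    k = len(pattern)
    return sum(
        sum(1 for i in range(len(r) - k + 1) if r[i:i + k] == pattern)
        for r in reads
    )

rng = random.Random(1)
genome = "".join(rng.choice("ACGT") for _ in range(10_000))
read_len, n_reads = 100, 100
reads = [genome[s:s + read_len]
         for s in (rng.randrange(len(genome) - read_len) for _ in range(n_reads))]

# Under an i.i.d. uniform background, each of the n_reads*(read_len-k+1)
# window positions matches a length-k pattern with probability (1/4)**k.
expected = n_reads * (read_len - 4 + 1) * 0.25 ** 4   # about 37.9 here
observed = count_pattern(reads, "ACGT")
```

    For rare patterns the observed count fluctuates around this expectation roughly like a (compound) Poisson variable; the approximations in the paper additionally bound the error of such approximations and account for the randomness of the background sequence itself.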

  20. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448

  1. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
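    A minimal version of such a Monte Carlo calculation might look as follows: draw burial depths from an assumed distribution and read off the probing depth that would reach a desired fraction of victims. The lognormal burial-depth distribution and its parameters are purely illustrative stand-ins, not field-calibrated avalanche statistics:

```python
import random

def probing_depth_for_coverage(depth_samples, coverage=0.95):
    """Smallest probe depth that would reach `coverage` of the simulated burials."""
    ordered = sorted(depth_samples)
    return ordered[int(coverage * len(ordered))]

# Monte Carlo draw of burial depths (metres); illustrative parameters only.
rng = random.Random(42)
depths = [rng.lognormvariate(0.0, 0.5) for _ in range(200_000)]
probe = probing_depth_for_coverage(depths, coverage=0.95)
```

    The same pattern (sample the unknown parameters, then optimize the decision quantity over the sampled ensemble) carries over to the resuscitation-time question, with the burial-depth distribution replaced by survival and extrication-time distributions.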

  2. Phase information contained in meter-scale SAR images

    NASA Astrophysics Data System (ADS)

    Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda

    2007-10-01

    The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption does no longer hold for typical high resolution urban remote sensing scenes, when a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" of how to analyze existing phase patterns that extend over neighboring pixels.

  3. MreB is important for cell shape but not for chromosome segregation of the filamentous cyanobacterium Anabaena sp. PCC 7120.

    PubMed

    Hu, Bin; Yang, Guohua; Zhao, Weixing; Zhang, Yingjiao; Zhao, Jindong

    2007-03-01

    MreB is a bacterial actin that plays important roles in determination of cell shape and chromosome partitioning in Escherichia coli and Caulobacter crescentus. In this study, the mreB from the filamentous cyanobacterium Anabaena sp. PCC 7120 was inactivated. Although the mreB null mutant showed a drastic change in cell shape, its growth rate, cell division and the filament length were unaltered. Thus, MreB in Anabaena maintains cell shape but is not required for chromosome partitioning. The wild type and the mutant had eight and 10 copies of chromosomes per cell respectively. We demonstrated that DNA content in two daughter cells after cell division in both strains was not always identical. The ratios of DNA content in two daughter cells had a Gaussian distribution with a standard deviation much larger than a value expected if the DNA content in two daughter cells were identical, suggesting that chromosome partitioning is a random process. The multiple copies of chromosomes in cyanobacteria are likely required for chromosome random partitioning in cell division.
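    The random-partitioning interpretation can be illustrated with a simple simulation in which each replicated chromosome copy independently segregates to either daughter cell; the copy number below is an illustrative value, and the binomial spread it produces mirrors the broad Gaussian ratio distribution reported above:

```python
import random

def daughter_ratios(copies=16, divisions=50_000, seed=7):
    """Each of the replicated chromosome copies segregates independently
    to one of the two daughter cells with probability 1/2; return the
    fraction of copies received by the first daughter at each division."""
    rng = random.Random(seed)
    return [sum(rng.random() < 0.5 for _ in range(copies)) / copies
            for _ in range(divisions)]
```

    For 16 copies the ratio has mean 0.5 but standard deviation 0.5/sqrt(16) = 0.125, so the two daughters are rarely identical, which is the signature of random partitioning.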

  4. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
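    The report's FORTRAN routine is not reproduced here, but one standard construction with the same properties (exact up to the quality of the underlying Gaussian generator) builds each pair from two independent standard normals:

```python
import math
import random

def bivariate_normal_pair(mu1, mu2, sigma1, sigma2, rho, rng):
    """Return (X, Y) with the requested means, standard deviations, and
    correlation rho, via X = mu1 + sigma1*Z1 and
    Y = mu2 + sigma2*(rho*Z1 + sqrt(1-rho**2)*Z2)."""
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    x = mu1 + sigma1 * z1
    y = mu2 + sigma2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y
```

    This is the 2x2 case of the Cholesky-factor approach: the linear combination gives Y unit variance and covariance rho with X by construction.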

  5. Transcription, intercellular variability and correlated random walk.

    PubMed

    Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar

    2008-11-01

    We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
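    A brute-force check of the model's premise (not the paper's analytical solution) is to integrate the mRNA dynamics under a simulated random on/off switch and compare the time-averaged level with the stationary mean synth/deg * k_on/(k_on + k_off) implied by the scaled Beta distribution; all rate constants below are arbitrary:

```python
import random

def telegraph_mean_mrna(k_on, k_off, synth, deg, t_end=2000.0, dt=0.01, seed=1):
    """Euler integration of dm/dt = synth*state - deg*m, where `state`
    switches 0 <-> 1 after exponentially distributed waiting times
    (rate k_on to switch on, rate k_off to switch off)."""
    rng = random.Random(seed)
    state = 0
    t_switch = rng.expovariate(k_on)   # time of the next switch
    m, t, integral = 0.0, 0.0, 0.0
    while t < t_end:
        if t >= t_switch:
            state = 1 - state
            t_switch = t + rng.expovariate(k_off if state else k_on)
        m += (synth * state - deg * m) * dt
        integral += m * dt
        t += dt
    return integral / t_end
```

    With k_on = k_off the promoter is on half the time, so the time average settles near synth/(2*deg); slow switching relative to degradation is the regime where the scaled Beta distribution becomes visibly bimodal.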

  6. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
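    The two-step recipe (spectral shaping of white Gaussian noise, then a pointwise inverse-CDF transform) can be sketched for an exponential target marginal; the Gaussian-shaped power spectrum, correlation length, and grid size below are arbitrary illustrative choices:

```python
import math

import numpy as np

def exponential_random_field(n=128, corr=4.0, seed=0):
    """Step 1: color white Gaussian noise by filtering in the Fourier
    domain with the square root of the desired power spectral density.
    Step 2: map the (approximately standard normal) colored field through
    the Gaussian CDF and then the inverse CDF of the target exponential."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    psd = np.exp(-2.0 * (np.pi * corr) ** 2 * (kx ** 2 + ky ** 2))
    g = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
    g = (g - g.mean()) / g.std()                       # standardize the marginal
    u = 0.5 * (1.0 + np.vectorize(math.erf)(g / math.sqrt(2.0)))  # Gaussian CDF
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    return -np.log(1.0 - u)                            # inverse exponential CDF
```

    As the abstract's "engineering approach" wording suggests, the pointwise transform distorts the prescribed spectrum somewhat, so the result approximates both targets rather than matching them exactly.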

  7. Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.

    PubMed

    Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang

    2017-01-01

    Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations may address this problem from different aspects: visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through a propagation of graph random walk. In particular, we construct multiple graphs, each corresponding to an individual view, and present a cross-view fusion approach based on graph random walk to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to a large amount of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged in the graph random walk to balance the multiple views. However, such a strategy may lead to an over-smooth similarity metric in which affinities between dissimilar samples are enlarged by excessive cross-view fusion. Thus, we devise a heuristic approach for controlling the iteration number of the fusion process in order to avoid over-smoothing. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.

  8. Human placenta-derived cells (PDA-001) for the treatment of adults with multiple sclerosis: a randomized, placebo-controlled, multiple-dose study.

    PubMed

    Lublin, Fred D; Bowen, James D; Huddlestone, John; Kremenchutzky, Marcelo; Carpenter, Adam; Corboy, John R; Freedman, Mark S; Krupp, Lauren; Paulo, Corri; Hariri, Robert J; Fischkoff, Steven A

    2014-11-01

    Infusion of PDA-001, a preparation of mesenchymal-like cells derived from full-term human placenta, is a new approach in the treatment of patients with multiple sclerosis. This safety study aimed to rule out the possibility of paradoxical exacerbation of disease activity by PDA-001 in patients with multiple sclerosis. This was a phase 1b, multicenter, randomized, double-blind, placebo-controlled, 2-dose ranging study including patients with relapsing-remitting multiple sclerosis or secondary progressive multiple sclerosis. The study was conducted at 6 sites in the United States and 2 sites in Canada. Patients were randomized 3:1 to receive 2 low-dose infusions of PDA-001 (150×10^6 cells) or placebo, given 1 week apart. After completing this cohort, subsequent patients received high-dose PDA-001 (600×10^6 cells) or placebo. Monthly brain magnetic resonance imaging scans were performed. The primary end point was ruling out the possibility of paradoxical worsening of MS disease activity. This was monitored using Cutter's rule (≥5 new gadolinium lesions on 2 consecutive scans) by brain magnetic resonance imaging on a monthly basis for six months and also the frequency of multiple sclerosis relapse. Ten patients with relapsing-remitting multiple sclerosis and 6 with secondary progressive multiple sclerosis were randomly assigned to treatment: 6 to low-dose PDA-001, 6 to high-dose PDA-001, and 4 to placebo. No patient met Cutter's rule. One patient receiving high-dose PDA-001 had an increase in T2 and gadolinium lesions and in Expanded Disability Status Scale score during a multiple sclerosis flare 5 months after receiving PDA-001. No other patient had an increase in Expanded Disability Status Scale score>0.5, and most had stable or decreasing Expanded Disability Status Scale scores. With high-dose PDA-001, 1 patient experienced a grade 1 anaphylactoid reaction and 1 had grade 2 superficial thrombophlebitis.
Other adverse events were mild to moderate and included headache, fatigue, infusion site reactions, and urinary tract infection. PDA-001 infusions were safe and well tolerated in relapsing-remitting multiple sclerosis and secondary progressive multiple sclerosis patients. No paradoxical worsening of lesion counts was noted with either dose. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Theoretical size distribution of fossil taxa: analysis of a null model

    PubMed Central

    Reed, William J; Hughes, Barry D

    2007-01-01

    Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249

  10. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variates for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln U. Random variates from the conditional Weibull distribution are likewise generated by the inverse transform, solving U = exp(-[(x+s-γ)/η]^β + [(x-γ)/η]^β) for the additional life s. Normal random variates are generated using a standard normal transformation together with the inverse transform method. An appendix lists the distributions supported by the model.
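    In modern code the same inverse transform method reads as follows; the unconditional exponential and Weibull cases are shown for clarity (by the memoryless property the conditional exponential needs no adjustment, while the conditional Weibull instead solves a survival-ratio equation involving the already-survived age):

```python
import math
import random

def exponential_inverse_transform(lam, rng):
    """Solve U = 1 - exp(-lam*x) for x: inverse transform sampling."""
    return -math.log(1.0 - rng.random()) / lam

def weibull_inverse_transform(shape, scale, rng):
    """Solve U = 1 - exp(-(x/scale)**shape) for x."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
```

    The method works for any distribution whose cumulative distribution function can be inverted in closed form, which is why the report applies it to both the exponential and the Weibull.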

  11. Empirical scaling of the length of the longest increasing subsequences of random walks

    NASA Astrophysics Data System (ADS)

    Mendonça, J. Ricardo G.

    2017-02-01

    We provide Monte Carlo estimates of the scaling of the length L_n of the longest increasing subsequences of n-step random walks for several different distributions of step lengths, short and heavy-tailed. Our simulations indicate that, barring possible logarithmic corrections, L_n ~ n^θ with the leading scaling exponent 0.60 ≲ θ ≲ 0.69 for the heavy-tailed distributions of step lengths examined, with values increasing as the distribution becomes more heavy-tailed, and θ ≃ 0.57 for distributions of finite variance, irrespective of the particular distribution. The results are consistent with existing rigorous bounds for θ, although in a somewhat surprising manner. For random walks with step lengths of finite variance, we conjecture that the correct asymptotic behavior of L_n is given by √n ln n, and we also propose the form of the subleading asymptotics. The distribution of L_n was found to follow a simple scaling form with scaling functions that vary with θ. Accordingly, when the step lengths are of finite variance, they seem to be universal. The nature of this scaling remains unclear, since we lack a working model, microscopic or hydrodynamic, for the behavior of the length of the longest increasing subsequences of random walks.
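    Estimates like these require computing the longest-increasing-subsequence length for many sampled walks. The standard O(n log n) patience-sorting routine below, with Gaussian steps as one example of a finite-variance step distribution, is a sketch of that machinery rather than the authors' code:

```python
import random
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest strictly increasing subsequence,
    via patience sorting in O(n log n)."""
    tails = []  # tails[k] = smallest tail of any increasing subsequence of length k+1
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def gaussian_walk(n, rng):
    """Positions of an n-step random walk with standard normal steps."""
    pos, walk = 0.0, []
    for _ in range(n):
        pos += rng.gauss(0.0, 1.0)
        walk.append(pos)
    return walk
```

    Averaging lis_length(gaussian_walk(n, rng)) over many seeds for a range of n, and fitting log L_n against log n, reproduces the kind of exponent estimate reported above.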

  12. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel-groups design, assuming various response-time distributions. In the RPPD, all subjects eventually receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, in which the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed under the different response-time distributions. The scenario in which the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel-groups RCT had higher power than the RPPD. The sample size requirement varies depending on the underlying hazard distribution, and the RPPD requires more subjects than the parallel-groups design to achieve similar power. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. An application of randomization for detecting evidence of thermoregulation in timber rattlesnakes (Crotalus horridus) from northwest Arkansas.

    PubMed

    Wills, C A; Beaupre, S J

    2000-01-01

    Most reptiles maintain their body temperatures within normal functional ranges through behavioral thermoregulation. Under some circumstances, thermoregulation may be a time-consuming activity, and thermoregulatory needs may impose significant constraints on the activities of ectotherms. A necessary (but not sufficient) condition for demonstrating thermoregulation is a difference between observed body temperature distributions and available operative temperature distributions. We examined operative and body temperature distributions of the timber rattlesnake (Crotalus horridus) for evidence of thermoregulation. Specifically, we compared the distribution of available operative temperatures in the environment to snake body temperatures during August and September. Operative temperatures were measured using 48 physical models that were randomly deployed in the environment and connected to a Campbell CR-21X data logger. Body temperatures (n=1,803) were recorded from 12 radiotagged snakes using temperature-sensitive telemetry. Separate randomization tests were conducted for each hour of day within each month. Actual body temperature distributions differed significantly from operative temperature distributions at most time points considered. Thus, C. horridus exhibits a necessary (but not sufficient) condition for demonstrating thermoregulation. However, unlike some desert ectotherms, we found no compelling evidence for thermal constraints on surface activity. Randomization may prove to be a powerful technique for drawing inferences about thermoregulation without reliance on studies of laboratory thermal preference.
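    The hourly randomization tests have the following general shape: pool the body and operative temperatures, reshuffle the labels many times, and compare the observed statistic with its reshuffled distribution. A minimal sketch with hypothetical temperature data and a difference-in-means statistic (the numbers and statistic are illustrative, not the study's):

```python
import random

rng = random.Random(0)

def randomization_test(body, operative, n_iter=999):
    """Randomization test for a shift between body and operative temperatures:
    labels are reshuffled over the pooled sample to build the null
    distribution of the absolute difference in means."""
    n = len(body)
    obs = abs(sum(body) / n - sum(operative) / len(operative))
    pooled = list(body) + list(operative)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / len(operative))
        if diff >= obs:
            count += 1
    return (count + 1) / (n_iter + 1)

# hypothetical data: body temperatures clustered near 30 C, operative
# temperatures spread over 18-38 C (48 physical models, as in the study design)
body = [rng.gauss(30.0, 1.0) for _ in range(40)]
operative = [rng.uniform(18.0, 38.0) for _ in range(48)]
p_value = randomization_test(body, operative)
```

    In the study this test would be run separately for each hour of day within each month, so a multiple-comparison correction may also be warranted.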

  14. Assessments of the quality of randomized controlled trials published in International Journal of Urology from 1994 to 2011.

    PubMed

    Cho, Hee Ju; Chung, Jae Hoon; Jo, Jung Ki; Kang, Dong Hyuk; Cho, Jeong Man; Yoo, Tag Keun; Lee, Seung Wook

    2013-12-01

    Randomized controlled trials are one of the most reliable resources for assessing the effectiveness and safety of medical treatments. Low-quality randomized controlled trials carry a large bias that can ultimately impair the reliability of their conclusions. The present study aimed to evaluate the quality of randomized controlled trials published in International Journal of Urology by using multiple quality assessment tools. Randomized controlled trial articles published in International Journal of Urology were found using the PubMed MEDLINE database, and qualitative analysis was carried out with three distinct assessment tools: the Jadad scale, the van Tulder scale and the Cochrane Collaboration Risk of Bias Tool. The quality of randomized controlled trials was analyzed by publication year, type of subjects, intervention, presence of funding and whether an institutional review board reviewed the study. A total of 68 randomized controlled trial articles were published among 1399 original articles in International Journal of Urology. Among these randomized controlled trials, 10 (2.70%) were from 1994 to 1999, 23 (4.10%) were from 2000 to 2005 and 35 (4.00%) were from 2006 to 2011 (P = 0.494). On assessment with the Jadad and van Tulder scales, the number and percentage of high-quality randomized controlled trials increased over time. Studies with institutional review board review or funding, or those carried out in multiple institutions, had a higher percentage of high-quality articles. The number and percentage of high-quality randomized controlled trials published in International Journal of Urology have increased over time. Furthermore, randomized controlled trials with funding resources, institutional review board reviews or multiple participating institutions were of higher quality than those without these features. © 2013 The Japanese Urological Association.

  15. A stochastic-geometric model of soil variation in Pleistocene patterned ground

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc

    2013-04-01

    In this paper we examine the spatial variability of soil in parent material whose complex spatial structure arises from non-linear geomorphic processes. We show that this variability can be better modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes such as water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular, the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model.
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.

  16. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X_(1)), E(Z_(1)), and E(Y_(1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_(1))/E(Y_(1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in the minimum.
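    The geometric results quoted here are easy to check numerically. For n i.i.d. Geometric(p) variables on {1, 2, ...}, the minimum is itself Geometric(q) with q = 1 - (1-p)^n, so E(X_(1)) = 1/q, while n exponentials with matching marginal expectations 1/p give E(Y_(1)) = 1/(np); the ratio np/q is then the expected number of ties at the minimum. A simulation sketch (n = 5, p = 0.2 are arbitrary illustrative choices):

```python
import math
import random

rng = random.Random(7)
n, p = 5, 0.2  # five i.i.d. Geometric(p) variables on {1, 2, ...}

def geometric():
    # inverse-transform sample of a Geometric(p) variable
    u = 1.0 - rng.random()  # u in (0, 1]
    return max(1, math.ceil(math.log(u) / math.log(1.0 - p)))

reps = 20000
total_min, total_ties = 0, 0
for _ in range(reps):
    xs = [geometric() for _ in range(n)]
    m = min(xs)
    total_min += m
    total_ties += xs.count(m)  # number of variables tied at the minimum

mean_min = total_min / reps
mean_ties = total_ties / reps

# theory: q = 1 - (1-p)^n, E(min) = 1/q, E(ties) = n*p/q = E(X_(1))/E(Y_(1))
q = 1.0 - (1.0 - p) ** n
```

    With these parameters both 1/q and np/q equal about 1.487, so the simulated mean minimum and mean tie count should agree with each other as well as with theory.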

  17. Open quantum random walk in terms of quantum Bernoulli noise

    NASA Astrophysics Data System (ADS)

    Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling

    2018-03-01

    In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.

  18. Experimental demonstration of an active phase randomization and monitor module for quantum key distribution

    NASA Astrophysics Data System (ADS)

    Sun, Shi-Hai; Liang, Lin-Mei

    2012-08-01

    Phase randomization is a very important assumption in the BB84 quantum key distribution (QKD) system with a weak coherent source; otherwise, an eavesdropper may obtain information about the final key. In this Letter, a stable and monitored active phase randomization scheme for one-way and two-way QKD systems is proposed and experimentally demonstrated. Furthermore, our scheme gives Alice an easy way to monitor the degree of randomization in experiments. We therefore expect our scheme to become a standard part of future QKD systems due to its security significance and feasibility.

  19. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, the hardware can instead serve as an efficient seed generator feeding a faster algorithmic random number generator, or a buffer can be used.
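    The normal-to-uniform step can be read as a probability integral transform: applying the standard normal CDF to (approximately) standard normal residuals yields approximately Uniform(0,1) variates. A sketch using synthetic Gaussian stand-ins for the detrended, standardized neutron-count residuals:

```python
import math
import random

rng = random.Random(3)

def normal_cdf(x):
    """Standard normal CDF, Phi(x), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# stand-in for the detrended, standardized residuals of the neutron counts
z = [rng.gauss(0.0, 1.0) for _ in range(10000)]

# probability integral transform: if Z ~ N(0, 1) then Phi(Z) ~ Uniform(0, 1)
u = [normal_cdf(x) for x in z]

mean_u = sum(u) / len(u)                              # should be near 1/2
var_u = sum((x - mean_u) ** 2 for x in u) / len(u)    # should be near 1/12
```

    In practice the quality of the resulting uniforms depends entirely on how close the detrended residuals are to standard normal, which is why the normality tests mentioned in the abstract matter.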

  20. Reducing financial avalanches by random investments

    NASA Astrophysics Data System (ADS)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk

    2013-12-01

    Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed on a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market, which is specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy for limiting the size of financial bubbles and crashes. We also find that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of the random traders is exponential. In other words, technical traders face a much greater risk of losses, and a lower probability of gains, than random traders.
