Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
NASA Astrophysics Data System (ADS)
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is presented at a level consistent with a first undergraduate exposure to the subject. A signal framework for defining a random process is provided, and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
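As a minimal companion to the spectral-characterisation material, the sketch below (Python with NumPy; the sampling rate, variance, and segment sizes are our illustrative choices, not the paper's) estimates the power spectral density of discrete white noise by periodogram averaging and compares it with the flat theoretical level sigma^2/fs.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, sigma2 = 1000.0, 2.0                    # sampling rate (Hz), noise variance
x = rng.normal(0.0, np.sqrt(sigma2), size=256 * 1024)

# Bartlett averaging: split the record into segments and average
# |FFT|^2 / (fs * N), an estimate of the two-sided power spectral density.
nseg, nlen = 256, 1024
segs = x.reshape(nseg, nlen)
psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / (fs * nlen)

# For white noise the two-sided PSD is flat: S(f) = sigma2 / fs.
print(psd.mean(), sigma2 / fs)              # both ~ 0.002
```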
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_\alpha(N_\beta(t)) \stackrel{d}{=} \sum_{j=1}^{N_\beta(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\alpha(\tau_k^\nu)$, $\nu\in(0,1]$, where $\tau_k^\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
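The first identity is easy to probe by simulation. In the NumPy sketch below (rates and horizon are arbitrary illustrative values), both sides should reproduce the mean αβt and variance αβt(1+α).

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, t, n = 2.0, 1.5, 3.0, 100_000

# Left side: the outer Poisson process evaluated at the random time N_beta(t).
n_beta = rng.poisson(beta * t, size=n)
lhs = rng.poisson(alpha * n_beta)

# Right side: a random sum of N_beta(t) i.i.d. Poisson(alpha) variables.
rhs = np.array([rng.poisson(alpha, size=k).sum() for k in n_beta])

print(lhs.mean(), rhs.mean())   # both ~ alpha*beta*t = 9
print(lhs.var(), rhs.var())     # both ~ alpha*beta*t*(1 + alpha) = 27
```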
Rumor Processes in Random Environment on ℕ and on Galton-Watson Trees
NASA Astrophysics Data System (ADS)
Bertacchi, Daniela; Zucca, Fabio
2013-11-01
The aim of this paper is to study rumor processes in random environment. In a rumor process a signal starts from the stations of a fixed vertex (the root) and travels on a graph from vertex to vertex. We consider two rumor processes. In the firework process each station, when reached by the signal, transmits it up to a random distance. In the reverse firework process, on the other hand, stations do not send any signal but they “listen” for it up to a random distance. The first random environment that we consider is the deterministic 1-dimensional tree ℕ with a random number of stations on each vertex; in this case the root is the origin of ℕ. We give conditions for survival/extinction on almost every realization of the sequence of stations. Later on, we study the processes on Galton-Watson trees with a random number of stations on each vertex. We show that if the probability of survival is positive, then there is survival on almost every realization of the infinite tree such that there is at least one station at the root. We characterize the survival of the process in some cases and we give sufficient conditions for survival/extinction.
A mathematical study of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
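A rough numerical rendering of such a product-plus-mean construction (our own AR(1) discretization with illustrative correlation times, not the paper's exact model): a fast Gaussian factor is modulated by a slowly varying random amplitude and added to an independent slowly varying mean, and the resulting process shows the excess kurtosis expected of amplitude-modulated turbulence.

```python
import numpy as np

def ar1(n, tau, sigma, rng):
    """Stationary AR(1) path with correlation time tau (in samples), std sigma."""
    phi = np.exp(-1.0 / tau)
    eps = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), size=n)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + eps[i]
    return x

rng = np.random.default_rng(2)
n = 200_000
g = ar1(n, tau=5.0, sigma=1.0, rng=rng)           # fast local Gaussian factor
a = 1.0 + ar1(n, tau=2000.0, sigma=0.3, rng=rng)  # slowly varying amplitude
mv = ar1(n, tau=5000.0, sigma=0.5, rng=rng)       # independent mean-value process

x = a * g + mv
kurt = np.mean((x - x.mean())**4) / np.var(x)**2
print(kurt)   # > 3: the product term makes the total process non-Gaussian
```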
Random bits, true and unbiased, from atmospheric turbulence
Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo
2014-01-01
Random numbers represent a fundamental ingredient for secure communications and numerical simulation, as well as for games and, in general, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation through strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extraction algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from the sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
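The AR-fit-then-convert-to-MA pipeline can be sketched compactly (standard Yule-Walker estimation in NumPy; the paper's FORTRAN routine for maximizing pulse-amplitude randomness is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic data from a known AR(2): x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t.
n = 50_000
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(2, n):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]

# Yule-Walker estimate of the AR(p) coefficients from sample autocovariances.
p = 2
acov = np.array([np.dot(x[:n - k] - x.mean(), x[k:] - x.mean()) / n
                 for k in range(p + 1)])
R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.linalg.solve(R, acov[1:p + 1])
print(a)                      # ~ [0.75, -0.5]

# AR -> MA: the impulse response of the fitted AR filter gives the MA
# (pulse-shape) coefficients used for interpretation.
h = np.zeros(20)
h[0] = 1.0
for t in range(1, 20):
    h[t] = sum(a[k] * h[t - 1 - k] for k in range(min(p, t)))
print(h[:6])
```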
An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response
Stipčević, Mario; Ursin, Rupert
2015-01-01
Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can be described only by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits an ultra low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576
Random Walks in a One-Dimensional Lévy Random Environment
NASA Astrophysics Data System (ADS)
Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena
2016-04-01
We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.
NASA Astrophysics Data System (ADS)
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of price changes in real estate markets. We suppose that in a book of prices the changes happen at the jump points of a Poisson process with random intensity, i.e. the moments of change follow a point process of Cox type. We calculate the cumulative mathematical expectation and variance for the random intensity of this point process. In the case where the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for the input and output prices recorded in the book of prices.
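A Cox process of this kind can be simulated by conditioning on an intensity path; the sketch below (NumPy, with an exponential-martingale intensity as one convenient positive martingale, our choice rather than the authors') draws counts whose mean matches λ0·t while their variance is inflated by the intensity randomness.

```python
import numpy as np

rng = np.random.default_rng(4)

def cox_counts(lam0, sigma, t, dt, npaths, rng):
    """Counts N(t) of a Cox process whose intensity is the exponential
    martingale lam_s = lam0 * exp(sigma*W_s - sigma**2 * s / 2)."""
    nsteps = int(t / dt)
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(npaths, nsteps)), axis=1)
    s = dt * np.arange(1, nsteps + 1)
    lam = lam0 * np.exp(sigma * w - 0.5 * sigma**2 * s)
    big_lambda = lam.sum(axis=1) * dt       # integrated intensity per path
    return rng.poisson(big_lambda)          # conditionally Poisson counts

n = cox_counts(lam0=5.0, sigma=0.3, t=10.0, dt=0.02, npaths=5000, rng=rng)
print(n.mean())   # ~ lam0 * t = 50, since the intensity is a martingale
print(n.var())    # exceeds the mean: overdispersion from the random intensity
```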
A qualitative assessment of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of two Gaussian processes and the sum of that product with a third Gaussian process. The resulting total random process is interpreted as the sum of an amplitude modulated process and a slowly varying, random mean value. The properties of the process are examined, including an interpretation of the process in terms of the physical structure of atmospheric motions. The inclusion of the mean value variation gives an improved representation of the properties of atmospheric motions, since the resulting process can account for the differences in the statistical properties of atmospheric velocity components and their gradients. The application of the process to atmospheric turbulence problems, including the response of aircraft dynamic systems, is examined. The effects of the mean value variation upon aircraft loads are small in most cases, but can be important in the measurement and interpretation of atmospheric turbulence data.
Weak convergence to isotropic complex symmetric α-stable random measure.
Wang, Jun; Li, Yunmeng; Sang, Liheng
2017-01-01
In this paper, we prove that an isotropic complex symmetric α-stable (SαS) random measure can be approximated by a complex process constructed from integrals based on the Poisson process with random intensity.
Scaling Limit of Symmetric Random Walk in High-Contrast Periodic Environment
NASA Astrophysics Data System (ADS)
Piatnitski, A.; Zhizhina, E.
2017-11-01
The paper deals with the asymptotic properties of a symmetric random walk in a high contrast periodic medium in Z^d, d≥1. From the existing homogenization results it follows that under diffusive scaling the limit behaviour of this random walk need not be Markovian. The goal of this work is to show that if in addition to the coordinate of the random walk in Z^d we introduce an extra variable that characterizes the position of the random walk inside the period then the limit dynamics of this two-component process is Markov. We describe the limit process and observe that the components of the limit process are coupled. We also prove the convergence in the path space for the said random walk.
On the mapping associated with the complex representation of functions and processes.
NASA Technical Reports Server (NTRS)
Harger, R. O.
1972-01-01
The mapping between function spaces that is implied by the representation of a real 'bandpass' function by a complex 'low-pass' function is explicitly examined. The discussion is extended to the representation of stationary random processes, where the mapping is between spaces of random processes. This approach clarifies the nature of the complex representation, especially in the case of random processes, and, in addition, derives the properties of the complex representation.
Scaling Limits and Generic Bounds for Exploration Processes
NASA Astrophysics Data System (ADS)
Bermolen, Paola; Jonckheere, Matthieu; Sanders, Jaron
2017-12-01
We consider exploration algorithms of the random sequential adsorption type both for homogeneous random graphs and for random geometric graphs based on spatial Poisson processes. At each step, a vertex of the graph becomes active and its neighboring nodes become blocked. Given an initial number of vertices N growing to infinity, we study statistical properties of the proportion of explored (active or blocked) nodes in time using scaling limits. We obtain exact limits for homogeneous graphs and prove an explicit central limit theorem for the final proportion of active nodes, known as the jamming constant, through a diffusion approximation for the exploration process, which can be described as a unidimensional process. We then focus on bounding the trajectories of such exploration processes on random geometric graphs, i.e., random sequential adsorption. As opposed to exploration processes on homogeneous random graphs, these do not allow for such a dimensional reduction. Instead we derive a fundamental relationship between the number of explored nodes and the discovered volume in the spatial process, and we obtain generic bounds for the fluid limit and the jamming constant: bounds that are independent of the dimension of space and of the detailed shape of the volume associated with the discovered node. Lastly, using coupling techniques, we give trajectorial interpretations of the generic bounds.
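For the homogeneous case, the exploration dynamics are easy to simulate. The NumPy sketch below runs random sequential adsorption on an Erdős–Rényi graph G(n, c/n) and compares the empirical jamming constant with the limit log(1+c)/c known for this model (a reference value we bring in; the paper's CLT and bounds are not reproduced).

```python
import numpy as np

def jamming_fraction(n, p, rng):
    """Random sequential adsorption on an Erdos-Renyi G(n, p) graph: visit
    vertices in uniformly random order; an unexplored vertex becomes active
    and blocks its unexplored neighbors. Returns the final active fraction."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    adj = upper | upper.T                   # symmetric adjacency, no loops
    state = np.zeros(n, dtype=int)          # 0 unexplored, 1 active, -1 blocked
    for v in rng.permutation(n):
        if state[v] == 0:
            state[v] = 1
            state[adj[v] & (state == 0)] = -1
    return np.mean(state == 1)

rng = np.random.default_rng(5)
c = 2.0
vals = [jamming_fraction(2000, c / 2000, rng) for _ in range(20)]
print(np.mean(vals), np.log(1 + c) / c)     # both ~ 0.549
```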
A new class of random processes with application to helicopter noise
NASA Technical Reports Server (NTRS)
Hardin, Jay C.; Miamee, A. G.
1989-01-01
The concept of dividing random processes into classes (e.g., stationary, locally stationary, periodically correlated, and harmonizable) has long been employed. A new class of random processes is introduced which includes many of these processes as well as other interesting processes which fall into none of the above classes. Such random processes are denoted as linearly correlated. This class is shown to include the familiar stationary and periodically correlated processes as well as many other, both harmonizable and non-harmonizable, nonstationary processes. When a process is linearly correlated for all t and harmonizable, its two-dimensional power spectral density $S_x(\omega_1, \omega_2)$ is shown to take a particularly simple form, being non-zero only on lines such that $\omega_1 - \omega_2 = \pm r_k$, where the $r_k$ are (not necessarily equally spaced) roots of a characteristic function. The relationship of such processes to the class of stationary processes is examined. In addition, the application of such processes in the analysis of typical helicopter noise signals is described.
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
Atomic clocks and the continuous-time random-walk
NASA Astrophysics Data System (ADS)
Formichella, Valerio; Camparo, James; Tavella, Patrizia
2017-11-01
Atomic clocks play a fundamental role in many fields; most notably they generate Coordinated Universal Time and are at the heart of all global navigation satellite systems. Notwithstanding their excellent timekeeping performance, their output frequency does vary: it can display deterministic frequency drift; diverse continuous noise processes result in nonstationary clock noise (e.g., random-walk frequency noise, modelled as a Wiener process); and the clock frequency may display sudden changes (i.e., "jumps"). Typically, the clock's frequency instability is evaluated by the Allan or Hadamard variances, whose functional forms can identify the different operative noise processes. Here, we show that the Allan and Hadamard variances of a particular continuous-time random walk, the compound Poisson process, have the same functional form as for a Wiener process with drift. The compound Poisson process, introduced as a model for the observed frequency jumps, is an alternative to the Wiener process for modelling random-walk frequency noise. This alternative model fits well the behavior of the rubidium clocks flying on GPS Block-IIR satellites. Further, starting from the jump statistics, the model can be improved by considering a more general form of continuous-time random walk, and this could bring new insights into the physics of atomic clocks.
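The coincidence of functional forms is easy to probe numerically. In the sketch below (NumPy; unit sample time, with the diffusion coefficients of the two models matched by hand), a Wiener frequency noise and a compound Poisson frequency process give Allan variances that both grow linearly with the averaging time.

```python
import numpy as np

def avar(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at an averaging time of m samples."""
    nblk = len(y) // m
    yb = y[:nblk * m].reshape(nblk, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(yb) ** 2)

rng = np.random.default_rng(6)
n = 2**20

# Wiener (random-walk) frequency noise: cumulative sum of white noise.
y_wiener = np.cumsum(rng.normal(0.0, 1e-3, size=n))

# Compound Poisson frequency: jumps at Poisson times with Gaussian sizes;
# jump rate and size chosen so both models share the diffusion coefficient.
y_cpp = np.cumsum((rng.random(n) < 1e-3) * rng.normal(0.0, 0.032, size=n))

for m in (10, 100, 1000):
    print(m, avar(y_wiener, m), avar(y_cpp, m))   # both ~ linear in m
```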
Random covering of the circle: the configuration-space of the free deposition process
NASA Astrophysics Data System (ADS)
Huillet, Thierry
2003-12-01
Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than the rod length s (the packing gas), those for which hard-rod and packing constraints are both fulfilled (parking configurations), and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain where ns = ρ, for some finite density ρ of points. Using results on spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
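One of these configuration probabilities has a classical closed form that makes a handy check: for n arcs of length s with ns < 1, the hard-rod (no-overlap) probability equals (1 − ns)^(n−1). A NumPy verification via the circular-spacings representation (our own sketch; parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, s, trials = 5, 0.1, 200_000

pts = np.sort(rng.random((trials, n)), axis=1)         # arc starting points
gaps = np.diff(pts, axis=1, append=pts[:, :1] + 1.0)   # circular spacings
p_hard = np.mean(np.all(gaps >= s, axis=1))            # no rod overlaps

print(p_hard, (1 - n * s) ** (n - 1))                  # both ~ 0.0625
```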
On fatigue crack growth under random loading
NASA Astrophysics Data System (ADS)
Zhu, W. Q.; Lin, Y. K.; Lei, Y.
1992-09-01
A probabilistic analysis of the fatigue crack growth, fatigue life and reliability of a structural or mechanical component is presented on the basis of fracture mechanics and theory of random processes. The material resistance to fatigue crack growth and the time-history of the stress are assumed to be random. Analytical expressions are obtained for the special case in which the random stress is a stationary narrow-band Gaussian random process, and a randomized Paris-Erdogan law is applicable. As an example, the analytical method is applied to a plate with a central crack, and the results are compared with those obtained from digital Monte Carlo simulations.
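A stripped-down version of such an analysis can be put in a few lines. The sketch below uses hypothetical material constants and a unit geometry factor Y = 1; only the Paris constant C is randomized, and the narrow-band Gaussian stress enters through the Rayleigh moment E[S^m]. It produces a Monte Carlo fatigue-life distribution from a randomized Paris-Erdogan law.

```python
import math
import numpy as np

rng = np.random.default_rng(8)

# Randomized Paris-Erdogan law: da/dN = C * (dK)^m, dK = S * sqrt(pi * a).
# For a stationary narrow-band Gaussian stress the per-cycle range S is
# Rayleigh, so cycle-averaging gives da/dN = C * (pi*a)^(m/2) * E[S^m],
# which integrates in closed form for m > 2.
m, a0, ac = 3.0, 1e-3, 2e-2              # exponent; initial/critical crack (m)
scale = 50.0                              # Rayleigh scale of stress range (MPa)
ES_m = scale**m * 2**(m / 2) * math.gamma(1 + m / 2)    # E[S^m]
C = 1e-12 * np.exp(rng.normal(0.0, 0.4, size=100_000))  # lognormal material C

K = C * np.pi**(m / 2) * ES_m
N_f = (a0**(1 - m / 2) - ac**(1 - m / 2)) / ((m / 2 - 1) * K)
print(N_f.mean(), np.median(N_f), np.quantile(N_f, 0.01))  # life statistics
```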
Mean first-passage times of non-Markovian random walkers in confinement.
Guérin, T; Levernier, N; Bénichou, O; Voituriez, R
2016-06-16
The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.
NASA Astrophysics Data System (ADS)
Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan
2018-03-01
We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles—the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate these two universal principles established across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.
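The flavor of the harmonic law can be probed with a toy experiment (ours, far simpler than the paper's framework): k independent walkers on a cycle graph, recording the first time any of them reaches a single target.

```python
import numpy as np

rng = np.random.default_rng(14)

def first_hit(nnodes, k, target, rng):
    """First time any of k independent +/-1 random walkers on a cycle of
    nnodes vertices (all starting at vertex 0) reaches the target."""
    pos = np.zeros(k, dtype=int)
    t = 0
    while not np.any(pos == target):
        pos = (pos + rng.choice([-1, 1], size=k)) % nnodes
        t += 1
    return t

nnodes, target, trials = 50, 25, 500
for k in (1, 2, 4, 8):
    T = np.mean([first_hit(nnodes, k, target, rng) for _ in range(trials)])
    print(k, T)   # decreases with k; the harmonic law quantifies this speed-up
```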
Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial
ERIC Educational Resources Information Center
Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.
2016-01-01
This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual…
Random Error in Judgment: The Contribution of Encoding and Retrieval Processes
ERIC Educational Resources Information Center
Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.
2009-01-01
Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…
Some functional limit theorems for compound Cox processes
NASA Astrophysics Data System (ADS)
Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.
2016-06-01
An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
A high speed implementation of the random decrement algorithm
NASA Technical Reports Server (NTRS)
Kiraly, L. J.
1982-01-01
The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high-speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hertz can be processed and a stably averaged signature presented in real time.
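In software, the summing-of-subrecords idea is a few lines; the sketch below (NumPy; a simulated lightly damped oscillator stands in for measured data, and the threshold and record length are illustrative) computes a random decrement signature whose decay rate reflects the net system damping.

```python
import numpy as np

def random_decrement(x, threshold, nlen):
    """Average all length-nlen subrecords starting at up-crossings of
    threshold; for a randomly excited linear system the averaged signature
    is proportional to the free-decay response."""
    starts = np.flatnonzero((x[:-1] < threshold) & (x[1:] >= threshold)) + 1
    starts = starts[starts + nlen <= len(x)]
    segs = np.stack([x[i:i + nlen] for i in starts])
    return segs.mean(axis=0), len(starts)

# Demo data: lightly damped oscillator driven by white noise
# (semi-implicit Euler integration).
rng = np.random.default_rng(9)
n, dt, wn, zeta = 200_000, 0.01, 2 * np.pi, 0.02
x, v = np.zeros(n), np.zeros(n)
f = rng.normal(0.0, 1.0, size=n)
for i in range(1, n):
    acc = f[i] - 2 * zeta * wn * v[i - 1] - wn**2 * x[i - 1]
    v[i] = v[i - 1] + acc * dt
    x[i] = x[i - 1] + v[i] * dt

sig, count = random_decrement(x, threshold=x.std(), nlen=1024)
print(count, sig[:4])
```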
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.; Zeiler, Thomas A.; Perry, Boyd, III
1989-01-01
This paper describes and illustrates two ways of performing time-correlated gust-load calculations. The first is based on Matched Filter Theory; the second on Random Process Theory. Both approaches yield theoretically identical results, represent novel applications of the theories, are computationally fast, and may be applied to other dynamic-response problems. A theoretical development and example calculations using both Matched Filter Theory and Random Process Theory approaches are presented.
Statistical properties of several models of fractional random point processes
NASA Astrophysics Data System (ADS)
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
Temporal changes in randomness of bird communities across Central Europe.
Renner, Swen C; Gossner, Martin M; Kahl, Tiemo; Kalko, Elisabeth K V; Weisser, Wolfgang W; Fischer, Markus; Allan, Eric
2014-01-01
Many studies have examined whether communities are structured by random or deterministic processes, and both are likely to play a role, but relatively few studies have attempted to quantify the degree of randomness in species composition. We quantified, for the first time, the degree of randomness in forest bird communities based on an analysis of spatial autocorrelation in three regions of Germany. The compositional dissimilarity between pairs of forest patches was regressed against the distance between them. We then calculated the y-intercept of the curve, i.e. the 'nugget', which represents the compositional dissimilarity at zero spatial distance. We therefore assume, following similar work on plant communities, that this represents the degree of randomness in species composition. We then analysed how the degree of randomness in community composition varied over time and with forest management intensity, which we expected to reduce the importance of random processes by increasing the strength of environmental drivers. We found that a high portion of the bird community composition could be explained by chance (overall mean of 0.63), implying that most of the variation in local bird community composition is driven by stochastic processes. Forest management intensity did not consistently affect the mean degree of randomness in community composition, perhaps because the bird communities were relatively insensitive to management intensity. We found a high temporal variation in the degree of randomness, which may indicate temporal variation in assembly processes and in the importance of key environmental drivers. We conclude that the degree of randomness in community composition should be considered in bird community studies, and the high values we find may indicate that bird community composition is relatively hard to predict at the regional scale.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
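The mixture construction is straightforward to reproduce: mixing the Poisson mean over a gamma law yields the negative binomial, and mixing over an inverse Gaussian law yields the alternative considered here. A NumPy sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(10)
n, mean = 100_000, 4.0

pois = rng.poisson(mean, size=n)                       # plain Poisson
lam_gamma = rng.gamma(shape=2.0, scale=mean / 2.0, size=n)
negbin = rng.poisson(lam_gamma)                        # gamma mixture
lam_ig = rng.wald(mean, 8.0, size=n)                   # inverse Gaussian mean
ig_mix = rng.poisson(lam_ig)                           # IG mixture

for name, y in (("Poisson", pois), ("gamma-mixed", negbin), ("IG-mixed", ig_mix)):
    print(name, y.mean(), y.var() / y.mean())   # ratio > 1: overdispersion
```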
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
…Excited by Input Random Processes
Baseski, Igor; Drignei, Dorin; Mourelatos, Zissimos P.; Majcher, Monica
2014-04-09
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
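A bare-bones rendering of the equivalent-ensemble idea (our simplification: a synthetic stationary signal stands in for the turbulent velocity records, and only the invariance of the mean and mean square is inspected):

```python
import numpy as np

def equivalent_ensemble(x, nrec):
    """Segment one long record into nrec equal, non-overlapping sample
    records, forming an 'equivalent ensemble' for stationarity checks."""
    nlen = len(x) // nrec
    return x[:nrec * nlen].reshape(nrec, nlen)

rng = np.random.default_rng(11)
x = rng.normal(0.0, 1.0, size=2**20)     # trivially stationary test signal
ens = equivalent_ensemble(x, nrec=64)

# Weak stationarity: the ensemble mean and mean square should not drift
# with time (fluctuations ~ 1/sqrt(nrec) are expected).
ens_mean = ens.mean(axis=0)
ens_msq = (ens**2).mean(axis=0)
print(ens_mean.std(), ens_msq.mean())

# Heuristic ergodicity estimate: a time average over one sample record
# should agree with the equivalent-ensemble average.
print(ens[0].mean(), ens_mean.mean())
```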
Facilitation of learning induced by both random and gradual visuomotor task variation
Braun, Daniel A.; Wolpert, Daniel M.
2012-01-01
Motor task variation has been shown to be a key ingredient in skill transfer, retention, and structural learning. However, many studies only compare training of randomly varying tasks to either blocked or null training, and it is not clear how experiencing different nonrandom temporal orderings of tasks might affect the learning process. Here we study learning in human subjects who experience the same set of visuomotor rotations, evenly spaced between −60° and +60°, either in a random order or in an order in which the rotation angle changed gradually. We compared subsequent learning of three test blocks of +30°→−30°→+30° rotations. The groups that underwent either random or gradual training showed significant (P < 0.01) facilitation of learning in the test blocks compared with a control group who had not experienced any visuomotor rotations before. We also found that movement initiation times in the random group during the test blocks were significantly (P < 0.05) lower than for the gradual or the control group. When we fit a state-space model with fast and slow learning processes to our data, we found that the differences in performance in the test block were consistent with the gradual or random task variation changing the learning and retention rates of only the fast learning process. Such adaptation of learning rates may be a key feature of ongoing meta-learning processes. Our results therefore suggest that both gradual and random task variation can induce meta-learning and that random learning has an advantage in terms of shorter initiation times, suggesting less reliance on cognitive processes. PMID:22131385
van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles
2015-01-01
The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (e.g., with equal inter-stimulus intervals) or random (e.g., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts. PMID:26579044
Resonance energy transfer process in nanogap-based dual-color random lasing
NASA Astrophysics Data System (ADS)
Shi, Xiaoyu; Tong, Junhua; Liu, Dahe; Wang, Zhaona
2017-04-01
The resonance energy transfer (RET) process between Rhodamine 6G and oxazine in nanogap-based random systems is systematically studied by revealing the variations and fluctuations of the RET coefficients with pump power density. Three working regions, namely stable fluorescence, dynamic lasing, and stable lasing, are thus demonstrated in the dual-color random systems. The stable RET coefficients in the fluorescence and lasing regions are generally different and depend strongly on the donor concentration and the donor-acceptor ratio. These results may provide a way to reveal the regularities of energy distribution in random systems and to design tunable multi-color coherent random lasers for colorful imaging.
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probability operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated with parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h − I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated with P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint, (densely) defined on the space of square-integrable functions over the (lower) half-space with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
On Nonstationary Stochastic Models for Earthquakes.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
Motion Among Random Obstacles on a Hyperbolic Space
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Ricciuti, Costantino; Sisti, Francesco
2016-02-01
We consider the motion of a particle along the geodesic lines of the Poincaré half-plane. The particle is specularly reflected when it hits randomly-distributed obstacles that are assumed to be motionless. This is the hyperbolic version of the well-known Lorentz Process studied in the Euclidean context. We analyse the limit in which the density of the obstacles increases to infinity and the size of each obstacle vanishes: under a suitable scaling, we prove that our process converges to a Markovian process, namely a random flight on the hyperbolic manifold.
ERIC Educational Resources Information Center
Felce, David; Perry, Jonathan
2004-01-01
Background: The aims were to: (i) explore the association between age and size of setting and staffing per resident; and (ii) report resident and setting characteristics, and indicators of service process and resident activity for a national random sample of staffed housing provision. Methods: Sixty settings were selected randomly from those…
Two-time scale subordination in physical processes with long-term memory
NASA Astrophysics Data System (ADS)
Stanislavsky, Aleksander; Weron, Karina
2008-03-01
We describe dynamical processes in continuous media with a long-term memory. Our consideration is based on a stochastic subordination idea and concerns two physical examples in detail. First we study a temporal evolution of the species concentration in a trapping reaction in which a diffusing reactant is surrounded by a sea of randomly moving traps. The analysis uses the random-variable formalism of anomalous diffusive processes. We find that the empirical trapping-reaction law, according to which the reactant concentration decreases in time as a product of an exponential and a stretched exponential function, can be explained by a two-time scale subordination of random processes. Another example is connected with a state equation for continuous media with memory. If the pressure and the density of a medium are subordinated in two different random processes, then the ordinary state equation becomes fractional with two-time scales. This allows one to arrive at the Bagley-Torvik type of state equation.
Interplay of Determinism and Randomness: From Irreversibility to Chaos, Fractals, and Stochasticity
NASA Astrophysics Data System (ADS)
Tsonis, A.
2017-12-01
We will start our discussion of randomness by looking exclusively at our formal mathematical system, to show that even in this pure and strictly logical system one cannot do away with randomness. By employing simple mathematical models, we will identify the three possible sources of randomness: randomness due to the inability to find the rules (irreversibility), randomness due to the inability to have infinite power (chaos), and randomness due to stochastic processes. Subsequently we will move from the mathematical system to our physical world to show that randomness, through the quantum mechanical character of small scales, through chaos, and because of the second law of thermodynamics, is an intrinsic property of nature as well. We will then argue that the randomness in the physical world is consistent with the three sources of randomness suggested by the study of simple mathematical systems. Many examples, ranging from purely mathematical to natural processes, will be presented, which clearly demonstrate how the combination of rules and randomness produces the world we live in. Finally, the principle of least effort or the principle of minimum energy consumption will be suggested as the underlying principle behind this symbiosis between determinism and randomness.
NASA Astrophysics Data System (ADS)
Vanmarcke, Erik
1983-03-01
Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder, and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.
Random numbers certified by Bell's theorem.
Pironio, S; Acín, A; Massar, S; de la Giroday, A Boyer; Matsukevich, D N; Maunz, P; Olmschenk, S; Hayes, D; Luo, L; Manning, T A; Monroe, C
2010-04-15
Randomness is a fundamental feature of nature and a valuable resource for applications ranging from cryptography and gambling to numerical simulation of physical and biological systems. Random numbers, however, are difficult to characterize mathematically, and their generation must rely on an unpredictable physical process. Inaccuracies in the theoretical modelling of such processes or failures of the devices, possibly due to adversarial attacks, limit the reliability of random number generators in ways that are difficult to control and detect. Here, inspired by earlier work on non-locality-based and device-independent quantum information processing, we show that the non-local correlations of entangled quantum particles can be used to certify the presence of genuine randomness. It is thereby possible to design a cryptographically secure random number generator that does not require any assumption about the internal working of the device. Such a strong form of randomness generation is impossible classically and possible in quantum systems only if certified by a Bell inequality violation. We carry out a proof-of-concept demonstration of this proposal in a system of two entangled atoms separated by approximately one metre. The observed Bell inequality violation, featuring near perfect detection efficiency, guarantees that 42 new random numbers are generated with 99 per cent confidence. Our results lay the groundwork for future device-independent quantum information experiments and for addressing fundamental issues raised by the intrinsic randomness of quantum theory.
Melnikov processes and chaos in randomly perturbed dynamical systems
NASA Astrophysics Data System (ADS)
Yagasaki, Kazuyuki
2018-07-01
We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under some nondegenerate condition, no matter how small the random forcing terms are. This result is very contrasting to the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov’s method and prove that the corresponding Melnikov functions, which we call the Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given and the Smale–Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory for the Duffing oscillator subjected to the Ornstein–Uhlenbeck process parametrically.
Solution-Processed Carbon Nanotube True Random Number Generator.
Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C
2017-08-09
With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.
Stochastic arbitrage return and its implication for option pricing
NASA Astrophysics Data System (ADS)
Fedotov, Sergei; Panayides, Stephanos
2005-01-01
The purpose of this work is to explore the role that random arbitrage opportunities play in pricing financial derivatives. We use a non-equilibrium model to set up a stochastic portfolio, and for the random arbitrage return, we choose a stationary ergodic random process rapidly varying in time. We exploit the fact that option price and random arbitrage returns change on different time scales which allows us to develop an asymptotic pricing theory involving the central limit theorem for random processes. We restrict ourselves to finding pricing bands for options rather than exact prices. The resulting pricing bands are shown to be independent of the detailed statistical characteristics of the arbitrage return. We find that the volatility “smile” can also be explained in terms of random arbitrage opportunities.
Noise, chaos, and (ε, τ)-entropy per unit time
NASA Astrophysics Data System (ADS)
Gaspard, Pierre; Wang, Xiao-Jing
1993-12-01
The degree of dynamical randomness of different time processes is characterized in terms of the (ε, τ)-entropy per unit time. The (ε, τ)-entropy is the amount of information generated per unit time, at different scales τ of time and ε of the observables. This quantity generalizes the Kolmogorov-Sinai entropy per unit time from deterministic chaotic processes, to stochastic processes such as fluctuations in mesoscopic physico-chemical phenomena or strong turbulence in macroscopic spacetime dynamics. The random processes that are characterized include chaotic systems, Bernoulli and Markov chains, Poisson and birth-and-death processes, Ornstein-Uhlenbeck and Yaglom noises, fractional Brownian motions, different regimes of hydrodynamical turbulence, and the Lorentz-Boltzmann process of nonequilibrium statistical mechanics. We also extend the (ε, τ)-entropy to spacetime processes like cellular automata, Conway's game of life, lattice gas automata, coupled maps, spacetime chaos in partial differential equations, as well as the ideal, the Lorentz, and the hard sphere gases. Through these examples it is demonstrated that the (ε, τ)-entropy provides a unified quantitative measure of dynamical randomness to both chaos and noises, and a method to detect transitions between dynamical states of different degrees of randomness as a parameter of the system is varied.
Money creation process in a random redistribution model
NASA Astrophysics Data System (ADS)
Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan
2014-01-01
In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
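A minimal version of such a random-exchange model with debt (our parameter choices; the paper's transfer-matrix and diffusion analyses are not reproduced) makes the creation mechanism visible: net money is conserved, while gross positive money grows as agents take on debt.

```python
import numpy as np

rng = np.random.default_rng(12)
n_agents, n_steps, debt_limit = 1000, 200_000, 5.0
money = np.ones(n_agents)                 # one unit of money per agent

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i != j and money[i] > -debt_limit: # the giver may borrow down to -5
        money[i] -= 1.0
        money[j] += 1.0

print(money.sum())                # net money: conserved at n_agents
print(money[money > 0.0].sum())   # gross positive money: exceeds n_agents
print(np.mean(money < 0.0))       # fraction of agents in debt
```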
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Analysis of dynamic system response to product random processes
NASA Technical Reports Server (NTRS)
Sidwell, K.
1978-01-01
The response of dynamic systems to the product of two independent Gaussian random processes is developed by use of the Fokker-Planck and associated moment equations. The development is applied to the amplitude modulated process which is used to model atmospheric turbulence in aeronautical applications. The exact solution for the system response is compared with the solution obtained by the quasi-steady approximation which omits the dynamic properties of the random amplitude modulation. The quasi-steady approximation is valid as a limiting case of the exact solution for the dynamic response of linear systems to amplitude modulated processes. In the nonlimiting case the quasi-steady approximation can be invalid for dynamic systems with low damping.
Results from the Biology Concept Inventory (BCI), and what they mean for biogeoscience literacy.
NASA Astrophysics Data System (ADS)
Garvin-Doxas, K.; Klymkowsky, M.
2008-12-01
While researching the Biology Concept Inventory (BCI) we found that a wide class of student difficulties in genetics and molecular biology can be traced to deep-seated misconceptions about random processes and molecular interactions. Students believe that random processes are inefficient, while biological systems are very efficient, and are therefore quick to propose their own rational explanations for various processes (from diffusion to evolution). These rational explanations almost always make recourse to a driver (natural selection in genetics, or density gradients in molecular biology), with the process only taking place when the driver is present. The concept of underlying random processes that take place all the time, giving rise to emergent behaviour, is almost totally absent. Even students who have taken advanced or college physics, and can discuss diffusion correctly in that context, cannot transfer that understanding to biological processes. Furthermore, their understanding of molecular interactions is purely geometric, resting on a lock-and-key model (rather than an energy minimization model) that does not allow for the survival of slight variations of the "correct" molecule. Together with the dominant misconception about random processes, this creates a strong conceptual barrier to understanding evolutionary processes and can frustrate the success of education programs.
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. To accelerate the numerical simulation, the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
An invariance property of generalized Pearson random walks in bounded geometries
NASA Astrophysics Data System (ADS)
Mazzolo, Alain
2009-03-01
Invariance properties of random walks in bounded domains are a topic of growing interest, since they contribute to improving our understanding of diffusion in confined geometries. Recently, for Pearson random walks with exponentially distributed straight paths, it was shown that under isotropic uniform incidence the average length of the trajectories through the domain is independent of the characteristics of the random walk and depends only on the ratio of the domain's volume to its surface. In this paper, thanks to arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process, and we give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form of the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is thus an analytical constraint that concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
Minimalist design of a robust real-time quantum random number generator
NASA Astrophysics Data System (ADS)
Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.
2015-08-01
We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
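The time-interval entropy source described above lends itself to a compact illustration. The following sketch, a toy stand-in rather than the authors' design, simulates Poissonian detector clicks and derives unbiased bits by comparing consecutive inter-click intervals; unlike the look-up-table extractor of the paper, this pairwise comparison yields one bit per two clicks, and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate inter-click intervals of a single-photon detector as i.i.d.
# exponential waiting times (a Poissonian click stream).
intervals = rng.exponential(scale=1.0, size=100_000)

# Compare consecutive intervals: t1 < t2 -> 0, t1 > t2 -> 1. For i.i.d.
# continuous intervals both outcomes are equally likely, so the output
# bits are unbiased without further post-processing.
t1, t2 = intervals[0::2], intervals[1::2]
bits = (t1 > t2).astype(np.uint8)

print(f"{bits.size} bits, fraction of ones = {bits.mean():.4f}")  # ~0.5
```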
Random numbers from vacuum fluctuations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543
2016-07-25
We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read into a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
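As a rough illustration of LFSR-based randomness extraction of the kind mentioned above, the sketch below runs a toy 16-bit Fibonacci LFSR over a deliberately biased bit stream; the register length, taps, and seed are hypothetical assumptions, not the authors' design, and serve only to show how the feedback structure mixes away bias.

```python
import numpy as np

def lfsr_whiten(raw_bits, taps=(16, 14, 13, 11), state=0xACE1):
    """Toy 16-bit Fibonacci LFSR that absorbs one raw bit per step and
    emits one whitened bit (the register's least significant bit)."""
    out = np.empty(len(raw_bits), dtype=np.uint8)
    for i, b in enumerate(raw_bits):
        fb = int(b)
        for t in taps:                      # XOR of the tapped register bits
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        out[i] = state & 1                  # output = this step's feedback bit
    return out

rng = np.random.default_rng(2)
biased = (rng.random(50_000) < 0.6).astype(np.uint8)   # 60/40 biased source
print("bias before:", biased.mean(), " after:", lfsr_whiten(biased).mean())
```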
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment, E[R^m], is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
Knowledge translation interventions for critically ill patients: a systematic review*.
Sinuff, Tasnim; Muscedere, John; Adhikari, Neill K J; Stelfox, Henry T; Dodek, Peter; Heyland, Daren K; Rubenfeld, Gordon D; Cook, Deborah J; Pinto, Ruxandra; Manoharan, Venika; Currie, Jan; Cahill, Naomi; Friedrich, Jan O; Amaral, Andre; Piquette, Dominique; Scales, Damon C; Dhanani, Sonny; Garland, Allan
2013-11-01
We systematically reviewed ICU-based knowledge translation studies to assess the impact of knowledge translation interventions on processes and outcomes of care. We searched electronic databases (to July, 2010) without language restrictions and hand-searched reference lists of relevant studies and reviews. Two reviewers independently identified randomized controlled trials and observational studies comparing any ICU-based knowledge translation intervention (e.g., protocols, guidelines, and audit and feedback) to management without a knowledge translation intervention. We focused on clinical topics that were addressed in greater than or equal to five studies. Pairs of reviewers abstracted data on the clinical topic, knowledge translation intervention(s), process of care measures, and patient outcomes. For each individual or combination of knowledge translation intervention(s) addressed in greater than or equal to three studies, we summarized each study using median risk ratio for dichotomous and standardized mean difference for continuous process measures. We used random-effects models. Anticipating a small number of randomized controlled trials, our primary meta-analyses included randomized controlled trials and observational studies. In separate sensitivity analyses, we excluded randomized controlled trials and collapsed protocols, guidelines, and bundles into one category of intervention. We conducted meta-analyses for clinical outcomes (ICU and hospital mortality, ventilator-associated pneumonia, duration of mechanical ventilation, and ICU length of stay) related to interventions that were associated with improvements in processes of care. From 11,742 publications, we included 119 investigations (seven randomized controlled trials, 112 observational studies) on nine clinical topics. Interventions that included protocols with or without education improved continuous process measures (seven observational studies and one randomized controlled trial; standardized mean difference [95% CI]: 0.26 [0.1, 0.42]; p = 0.001 and four observational studies and one randomized controlled trial; 0.83 [0.37, 1.29]; p = 0.0004, respectively). Heterogeneity among studies within topics ranged from low to extreme. The exclusion of randomized controlled trials did not change our results. Single-intervention and lower-quality studies had higher standardized mean differences compared to multiple-intervention and higher-quality studies (p = 0.013 and 0.016, respectively). There were no associated improvements in clinical outcomes. Knowledge translation interventions in the ICU that include protocols with or without education are associated with the greatest improvements in processes of critical care.
Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.
Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai
2017-02-20
A chaotic external-cavity semiconductor laser (ECL) is a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using a physical broadband white chaos generated by optical heterodyning of two ECLs as the entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectral efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
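The LSB-extraction step is easy to sketch. The toy example below quantizes a symmetric waveform with an 8-bit ADC and keeps the 4 least significant bits of each sample; the Gaussian stand-in source and the full-scale range are assumptions for illustration, not the paper's heterodyne chaos.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the sampled chaotic waveform: a symmetric, wideband signal
# (Gaussian here, purely for illustration).
analog = rng.normal(size=100_000)

# 8-bit quantization over an assumed +/-4-sigma full-scale range.
codes = np.clip(((analog + 4.0) / 8.0 * 256).astype(int), 0, 255)

# Keep the 4 least significant bits of every sample: 4 bits/sample, which
# at an 80-GHz sampling rate corresponds to the paper's 320-Gbps figure.
lsbs = (codes[:, None] >> np.arange(4)) & 1
bits = lsbs.astype(np.uint8).ravel()
print(bits.size, "bits, fraction of ones:", bits.mean())
```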
Information content versus word length in random typing
NASA Astrophysics Data System (ADS)
Ferrer-i-Cancho, Ramon; Moscoso del Prado Martín, Fermín
2011-12-01
Recently, it has been claimed that a linear relationship between a measure of information content and word length is expected from word length optimization and it has been shown that this linearity is supported by a strong correlation between information content and word length in many languages (Piantadosi et al 2011 Proc. Nat. Acad. Sci. 108 3825). Here, we study in detail some connections between this measure and standard information theory. The relationship between the measure and word length is studied for the popular random typing process where a text is constructed by pressing keys at random from a keyboard containing letters and a space behaving as a word delimiter. Although this random process does not optimize word lengths according to information content, it exhibits a linear relationship between information content and word length. The exact slope and intercept are presented for three major variants of the random typing process. A strong correlation between information content and word length can simply arise from the units making a word (e.g., letters) and not necessarily from the interplay between a word and its context as proposed by Piantadosi and co-workers. In itself, the linear relation does not entail the results of any optimization process.
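A minimal simulation of the random typing process makes the claimed linearity visible. In the sketch below the alphabet size, space probability, and count threshold are arbitrary choices; the expected slope log2(A/(1-p)) follows from the word-probability formula of the process, since a word of length L has probability proportional to ((1-p)/A)^L.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
alphabet, p_space = "abc", 0.25

# Random typing: keys are pressed independently; the space is the delimiter.
p_letter = (1 - p_space) / len(alphabet)
keys = rng.choice(list(alphabet) + [" "], size=2_000_000,
                  p=[p_letter] * len(alphabet) + [p_space])
words = "".join(keys).split()

counts = Counter(words)
total = len(words)

# Empirical information content -log2 p(word) versus word length, keeping
# only well-sampled word types (count >= 20) to avoid undersampling bias.
lengths, info = zip(*[(len(w), -np.log2(c / total))
                      for w, c in counts.items() if c >= 20])
slope, intercept = np.polyfit(lengths, info, 1)
print(f"info = {slope:.3f} * length + {intercept:.3f}; "
      f"expected slope = {np.log2(len(alphabet) / (1 - p_space)):.3f}")
```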
Nonstationary envelope process and first excursion probability
NASA Technical Reports Server (NTRS)
Yang, J.
1972-01-01
A definition of the envelope of nonstationary random processes is proposed. The establishment of the envelope definition makes it possible to simulate the nonstationary random envelope directly. Envelope statistics, such as the density function, joint density function, moment function, and level crossing rate, which are relevant to analyses of catastrophic failure, fatigue, and crack propagation in structures, are derived. Applications of the envelope statistics to the prediction of structural reliability under random loadings are discussed in detail.
NASA Astrophysics Data System (ADS)
Shankar Kumar, Ravi; Goswami, A.
2015-06-01
The article scrutinises the learning effect of the unit production time on the optimal lot size for an uncertain and imprecise imperfect production process, wherein shortages are permissible and partially backlogged. Contextually, we contemplate the fuzzy chance of the production process shifting from an 'in-control' state to an 'out-of-control' state, and a re-work facility for produced items of imperfect quality. The elapsed time until the process shifts is considered a fuzzy random variable, and consequently, the fuzzy random total cost per unit time is derived. Fuzzy expectation and the signed distance method are used to transform the fuzzy random cost function into an equivalent crisp function. The results are illustrated with the help of a numerical example. Finally, a sensitivity analysis of the optimal solution with respect to the major parameters is carried out.
Search for Directed Networks by Different Random Walk Strategies
NASA Astrophysics Data System (ADS)
Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long
2012-03-01
A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed from several typical complex networks. Because the differences in search efficiency among the strategies are rooted in network clustering, the clustering coefficient as seen by a random walker on directed networks is defined and computed to be half that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős-Rényi model, the Watts-Strogatz model, the Barabási-Albert model and a clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, in contrast to their behavior on undirected networks, since the clustering coefficient of directed networks is smaller than that of undirected networks.
NASA Astrophysics Data System (ADS)
Csáki, Endre; Csörgő, Miklós; Földes, Antónia; Révész, Pál
2018-04-01
We consider random walks on the square lattice of the plane along the lines of Heyde (J Stat Phys 27:721-730, 1982, Stochastic processes, Springer, New York, 1993) and den Hollander (J Stat Phys 75:891-918, 1994), whose studies have in part been inspired by the so-called transport phenomena of statistical physics. Two-dimensional anisotropic random walks with anisotropic density conditions à la Heyde (J Stat Phys 27:721-730, 1982, Stochastic processes, Springer, New York, 1993) yield fixed column configurations and nearest-neighbour random walks in a random environment on the square lattice of the plane as in den Hollander (J Stat Phys 75:891-918, 1994) result in random column configurations. In both cases we conclude simultaneous weak Donsker and strong Strassen type invariance principles in terms of appropriately constructed anisotropic Brownian motions on the plane, with self-contained proofs in both cases. The style of presentation throughout will be that of a semi-expository survey of related results in a historical context.
An introduction to chaotic and random time series analysis
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1989-01-01
The origin of chaotic behavior and the relation of chaos to randomness are explained. Two mathematical results are described: (1) a representation theorem guarantees the existence of a specific time-domain model for chaos and addresses the relation between chaotic, random, and strictly deterministic processes; (2) a theorem assures that information on the behavior of a physical system in its complete state space can be extracted from time-series data on a single observable. Focus is placed on an important connection between the dynamical state space and an observable time series. These two results lead to a practical deconvolution technique combining standard random process modeling methods with new embedded techniques.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
A new fundamental model of moving particle for reinterpreting Schroedinger equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umar, Muhamad Darwis
2012-06-20
The study of the Schroedinger equation is based on the hypothesis that every particle must move randomly within a quantum-sized volume. In addition to this random motion, every particle can undergo relative motion through the movement of its quantum-sized volume, and these motions can also coincide. In the proposed model, the random motion is an intrinsic property of the particle. Every change in the speed of the intrinsic random motion, or in the velocity of the translational motion of the quantum-sized volume, represents a transition between two states, and a change in the speed of the intrinsic random motion generates a diffusion process from the perspective of Brownian motion. The diffusion process can take place as backward and forward processes and represents a dissipative system. To derive the Schroedinger equation from this hypothesis, the time operator introduced by Nelson is used. From a fundamental analysis, we find that Newton's law F = ma (in vector form) should naturally be viewed not as an external force, but as a description of both the presence of intrinsic random motion and the change of the particle's energy.
Graphic Simulations of the Poisson Process.
1982-10-01
[Only fragments of this report survive, including table-of-contents entries for "Random numbers and transformations", "The random number generator", and a "Poisson processes user guide".] In the superimposed mode, two Poisson processes are active, each with a different rate parameter (call them Type I and Type II with respective rates L1 and L2). The probability p that a given event is of Type I is generated by p = L1 / (L1 + L2).
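The recovered relation p = L1 / (L1 + L2) is the standard thinning probability for superimposed Poisson processes, and a minimal simulation confirms it; the rates below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
L1, L2 = 2.0, 3.0     # rates of the Type I and Type II Poisson processes
p = L1 / (L1 + L2)    # probability that a given event is of Type I

# The superposition of two Poisson processes is itself Poisson with rate
# L1 + L2; each event is independently of Type I with probability p.
n_events = 100_000
gaps = rng.exponential(scale=1.0 / (L1 + L2), size=n_events)
times = np.cumsum(gaps)
is_type1 = rng.random(n_events) < p

print("fraction of Type I events:", is_type1.mean())   # ~ L1/(L1+L2) = 0.4
print("mean event rate:", n_events / times[-1])        # ~ L1 + L2 = 5.0
```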
Casino physics in the classroom
NASA Astrophysics Data System (ADS)
Whitney, Charles A.
1986-12-01
This article describes a computer-centered seminar on the elements of probability and random processes that focuses on Monte Carlo simulations of processes such as coin flips, random walks on a lattice, and the behavior of photons and atoms in a gas. Representative computer programs are also described.
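In the spirit of the seminar's Monte Carlo exercises, a minimal sketch of two of the named simulations, coin flips and a lattice random walk, follows; the sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# Coin flips: the running fraction of heads converges to 1/2.
flips = rng.integers(0, 2, size=10_000)
print("fraction of heads:", flips.mean())

# Symmetric random walk on a 1-D lattice: for an N-step walk the
# root-mean-square displacement grows like sqrt(N).
steps = rng.choice([-1, 1], size=(5_000, 1_000))   # 5000 walks, 1000 steps
final = steps.sum(axis=1)
print("RMS displacement:", np.sqrt((final ** 2).mean()),
      " sqrt(N) =", np.sqrt(1_000))
```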
Formulation and Application of the Hierarchical Generalized Random-Situation Random-Weight MIRID
ERIC Educational Resources Information Center
Hung, Lai-Fa
2011-01-01
The process-component approach has become quite popular for examining many psychological concepts. A typical example is the model with internal restrictions on item difficulty (MIRID) described by Butter (1994) and Butter, De Boeck, and Verhelst (1998). This study proposes a hierarchical generalized random-situation random-weight MIRID. The…
Monte Carlo based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, R.; Waris, A.; Viridi, S.
2014-09-01
There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, the fission yield can be calculated using two approaches, namely the macroscopic approach and the microscopic approach. This work proposes another approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound nucleus central point into two parts, the left and right central points. These three points have different Gaussian distribution parameters, such as the means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the number of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated by randomly changing σL and σR.
Continuous-time random walks with reset events. Historical background and new perspectives
NASA Astrophysics Data System (ADS)
Montero, Miquel; Masó-Puigdellosas, Axel; Villarroel, Javier
2017-09-01
In this paper, we consider a stochastic process that may experience random reset events which relocate the system to its starting position. We focus our attention on a one-dimensional, monotonic continuous-time random walk with a constant drift: the process moves in a fixed direction between the reset events, either by the effect of the random jumps, or by the action of a deterministic bias. However, the orientation of its motion is randomly determined after each restart. As a result of these alternating dynamics, interesting properties do emerge. General formulas for the propagator as well as for two extreme statistics, the survival probability and the mean first-passage time, are also derived. The rigor of these analytical results is verified by numerical estimations, for particular but illuminating examples.
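A stripped-down version of such a resetting process is easy to simulate. The sketch below keeps only the deterministic drift component (no random jumps), with Poissonian resets to the origin and a re-drawn orientation after each restart, and estimates the mean first-passage time to a target; the speed, reset rate, and target position are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
v, r, target = 1.0, 0.5, 2.0   # drift speed, reset rate, target position

def first_passage_time():
    """Ballistic motion from the origin; Poissonian resets relocate the
    walker to the origin and re-draw its direction of motion at random."""
    t = 0.0
    while True:
        direction = rng.choice([-1.0, 1.0])        # orientation after restart
        tau = rng.exponential(1.0 / r)             # time until the next reset
        if direction > 0 and v * tau >= target:    # target hit before reset
            return t + target / v
        t += tau                                   # reset: back to the origin

samples = [first_passage_time() for _ in range(20_000)]
print("estimated mean first-passage time:", np.mean(samples))
```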
Aircraft adaptive learning control
NASA Technical Reports Server (NTRS)
Lee, P. S. T.; Vanlandingham, H. F.
1979-01-01
The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems, and the control of randomly-sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly-sampled linear process model with additive process and noise is developed.
Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case
NASA Astrophysics Data System (ADS)
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.
2010-06-01
Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
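The Monte Carlo computation of the two integrals can be sketched directly. For a pure-jump path, the Itô (pre-point) and Stratonovich (mid-point) sums of the integral of X dX admit closed-form checks, which the toy code below verifies for a normal compound Poisson process; the jump rate and time horizon are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
lam, T = 5.0, 10.0                        # jump rate and time horizon

# Normal compound Poisson process: a Poisson number of i.i.d. normal jumps.
n_jumps = rng.poisson(lam * T)
jumps = rng.normal(size=n_jumps)
x_before = np.concatenate(([0.0], np.cumsum(jumps)[:-1]))   # X just before each jump
x_after = x_before + jumps                                  # X just after each jump

ito = np.sum(x_before * jumps)                       # pre-point (Ito) rule
strat = np.sum(0.5 * (x_before + x_after) * jumps)   # mid-point (Stratonovich)

x_T = x_after[-1] if n_jumps else 0.0
print("Ito  :", ito,   " closed form:", 0.5 * (x_T**2 - np.sum(jumps**2)))
print("Strat:", strat, " closed form:", 0.5 * x_T**2)
```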
Study on Stationarity of Random Load Spectrum Based on the Special Road
NASA Astrophysics Data System (ADS)
Yan, Huawen; Zhang, Weigong; Wang, Dong
2017-09-01
Among methods for assessing the quality of special roads, one approach uses a wheel force sensor; its essence is to collect the load spectrum of the car as a reflection of the quality of the road. From the definition of a stochastic process, it is easy to see that the load spectrum is a stochastic process. However, the analysis methods and ranges of application of different random processes differ greatly, especially in engineering practice, which directly affects the design and development of the experiment. Therefore, determining the type of a random process has important practical significance. Based on an analysis of the digital characteristics of the road load spectrum, this paper determines that the road load spectrum in this experiment belongs to a stationary stochastic process, paving the way for follow-up modeling and feature extraction for the special road.
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
Random Number Generation for High Performance Computing
2015-01-01
[Only fragments of this report survive amid patent-correspondence residue.] The recoverable text concerns parallel random number streams and a quality metric for parallel random number streams, in a setting where a computation is divided into subtasks, each executed by a separate thread or process.
Slow diffusion by Markov random flights
NASA Astrophysics Data System (ADS)
Kolesnik, Alexander D.
2018-06-01
We present a conception of slow diffusion processes in the Euclidean spaces R^m, m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, on long time intervals, leads to the stationary distributions is given. The stationary distributions of slow diffusion processes in some low-dimensional Euclidean spaces are presented.
Quantum Random Number Generation Using a Quanta Image Sensor
Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.
2016-01-01
A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
NASA Astrophysics Data System (ADS)
Xue, Xiaofeng
2016-12-01
In this paper we are concerned with the contact process with random recovery rates on open clusters of bond percolation on Z^d. Let ξ be a random variable such that P(ξ ≥ 1) = 1, which ensures E[1/ξ] < +∞; we then assign i.i.d. copies of ξ to the vertices as the random recovery rates. Assuming that each edge is open with probability p and that the infection can only spread through the open edges, we obtain that limsup_{d→+∞} λ_d ≤ λ_c = 1/(p E[1/ξ]), where λ_d is the critical value of the process on Z^d, i.e., the maximum of the infection rates with which the infection dies out with probability one when only the origin is infected at t = 0. To prove the above main result, we show that the following phase transition occurs. Assuming that ⌈log d⌉ vertices are infected at t = 0, where these vertices can be located anywhere, then when the infection rate λ > λ_c the process survives with high probability as d → +∞, while when λ < λ_c the process dies out by time O(log d) with high probability.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves "weak consensus", i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying, as a random dynamical system, the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore
2011-12-01
Randomization is a key step in reducing selection bias during the treatment allocation phase in randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. In the dental and orthodontic literature, treatment allocation is frequently characterized as random; however, the randomization procedures followed are often not appropriate. Randomization methods assign treatment to the trial arms at random, without foreknowledge of allocation by either the participants or the investigators, thus reducing selection bias. Randomization entails generation of the random allocation, allocation concealment, and the actual methodology for implementing treatment allocation randomly and unpredictably. The most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons that make randomization an integral part of solid clinical trial methodology, and presents the main randomization schemes applicable to clinical trials in orthodontics.
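As one concrete instance of the restricted randomization mentioned above, a minimal sketch of permuted-block list generation follows; the arm labels, block size, and seed are hypothetical, and allocation concealment and implementation are separate operational steps not shown here.

```python
import random

def permuted_block_list(n_patients, arms=("A", "B"), block_size=4, seed=42):
    """Restricted randomization: within every block, each arm appears
    equally often, keeping group sizes balanced as recruitment proceeds."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)               # random order within the block
        allocation.extend(block)
    return allocation[:n_patients]

print(permuted_block_list(10))   # e.g. ['B', 'A', 'A', 'B', 'A', 'B', ...]
```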
Jepson, Marcus; Elliott, Daisy; Conefrey, Carmel; Wade, Julia; Rooshenas, Leila; Wilson, Caroline; Beard, David; Blazeby, Jane M; Birtle, Alison; Halliday, Alison; Stein, Rob; Donovan, Jenny L
2018-07-01
To explore how the concept of randomization is described by clinicians and understood by patients in randomized controlled trials (RCTs) and how it contributes to patient understanding and recruitment. Qualitative analysis of 73 audio recordings of recruitment consultations from five, multicenter, UK-based RCTs with identified or anticipated recruitment difficulties. One in 10 appointments did not include any mention of randomization. Most included a description of the method or process of allocation. Descriptions often made reference to gambling-related metaphors or similes, or referred to allocation by a computer. Where reference was made to a computer, some patients assumed that they would receive the treatment that was "best for them". Descriptions of the rationale for randomization were rarely present and often only came about as a consequence of patients questioning the reason for a random allocation. The methods and processes of randomization were usually described by recruiters, but often without clarity, which could lead to patient misunderstanding. The rationale for randomization was rarely mentioned. Recruiters should avoid problematic gambling metaphors and illusions of agency in their explanations and instead focus on clearer descriptions of the rationale and method of randomization to ensure patients are better informed about randomization and RCT participation. Copyright © 2018 University of Bristol. Published by Elsevier Inc. All rights reserved.
Krieger, Janice L; Palmer-Wackerly, Angela; Dailey, Phokeng M; Krok-Schoen, Jessica L; Schoenberg, Nancy E; Paskett, Electra D
2015-12-01
Comprehension of randomization is a vital, but understudied, component of informed consent to participate in cancer randomized clinical trials (RCTs). This study examines patient comprehension of the randomization process as well as sources of ongoing uncertainty that may inhibit a patient's ability to provide informed consent to participate in RCTs. Cancer patients living in rural Appalachia who were offered an opportunity to participate in a cancer treatment RCT completed in-depth interviews and a brief survey. No systematic differences in randomization comprehension between patients who consented and those who declined participation in a cancer RCT were detected. Comprehension is conceptually distinct from uncertainty, with patients who had both high and low comprehension experiencing randomization-related uncertainty. Uncertainty about randomization was found to have cognitive and affective dimensions. Not all patients enrolling in RCTs have a sufficient understanding of the randomization process to provide informed consent. Healthcare providers need to be aware of the different types of randomization-related uncertainty. Efforts to improve informed consent to participate in RCTs should focus on having patients teach back their understanding of randomization. This practice could yield valuable information about the patient's cognitive and affective understanding of randomization as well as opportunities to correct misperceptions. Education about RCTs should reflect patient expectations of individualized care by explaining how all treatments being compared are appropriate to the specifics of a patient's disease.
Reliability analysis of structures under periodic proof tests in service
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1976-01-01
A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.
Implementing traceability using particle randomness-based textile printed tags
NASA Astrophysics Data System (ADS)
Agrawal, T. K.; Koehl, L.; Campagne, C.
2017-10-01
This article introduces a random particle-based traceability tag for textiles. The proposed tag not only acts as a unique signature for the corresponding textile product but also possesses features such as being easy to manufacture and hard to copy. It seeks applications in brand authentication and traceability in the textile and clothing (T&C) supply chain. A prototype has been developed by a screen-printing process, in which micron-scale particles were mixed with the printing paste and printed on cotton fabrics to attain the required randomness. To encode the randomness, an image of the developed tag was taken and analyzed using image processing. The randomness of the particles acts as a product key or unique signature that is required to decode the tag. Finally, washing and abrasion resistance tests were conducted to check the durability of the printed tag.
Brown, Alexandra R; Gajewski, Byron J; Aaronson, Lauren S; Mudaranthakam, Dinesh Pal; Hunt, Suzanne L; Berry, Scott M; Quintana, Melanie; Pasnoor, Mamatha; Dimachkie, Mazen M; Jawdat, Omar; Herbelin, Laura; Barohn, Richard J
2016-08-31
In the last few decades, the number of trials using Bayesian methods has grown rapidly. Publications prior to 1990 included only three clinical trials that used Bayesian methods, but that number quickly jumped to 19 in the 1990s and to 99 from 2000 to 2012. While this literature provides many examples of Bayesian Adaptive Designs (BAD), none of the papers that are available walks the reader through the detailed process of conducting a BAD. This paper fills that gap by describing the BAD process used for one comparative effectiveness trial (Patient Assisted Intervention for Neuropathy: Comparison of Treatment in Real Life Situations) that can be generalized for use by others. A BAD was chosen with efficiency in mind. Response-adaptive randomization allows the potential for substantially smaller sample sizes, and can provide faster conclusions about which treatment or treatments are most effective. An Internet-based electronic data capture tool, which features a randomization module, facilitated data capture across study sites and an in-house computation software program was developed to implement the response-adaptive randomization. A process for adapting randomization with minimal interruption to study sites was developed. A new randomization table can be generated quickly and can be seamlessly integrated in the data capture tool with minimal interruption to study sites. This manuscript is the first to detail the technical process used to evaluate a multisite comparative effectiveness trial using adaptive randomization. An important opportunity for the application of Bayesian trials is in comparative effectiveness trials. The specific case study presented in this paper can be used as a model for conducting future clinical trials using a combination of statistical software and a web-based application. ClinicalTrials.gov Identifier: NCT02260388 , registered on 6 October 2014.
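The response-adaptive randomization described above can be illustrated with a generic Beta-Bernoulli scheme in which each patient's allocation probability is the posterior probability that an arm is best. This is a minimal sketch of the general technique, not the trial's actual algorithm, and the response rates and sample sizes below are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
true_response = [0.35, 0.55]          # unknown arm response rates (simulated)
successes = np.zeros(2)
failures = np.zeros(2)

for patient in range(200):
    # Posterior probability that each arm is best, via Monte Carlo draws
    # from the Beta(1 + successes, 1 + failures) posteriors.
    draws = rng.beta(1 + successes, 1 + failures, size=(4_000, 2))
    p_best = np.bincount(draws.argmax(axis=1), minlength=2) / 4_000

    arm = rng.choice(2, p=p_best)                 # adaptive allocation
    outcome = rng.random() < true_response[arm]   # simulated response
    successes[arm] += outcome
    failures[arm] += 1 - outcome

print("allocations per arm:", successes + failures)
print("posterior mean response rates:", (1 + successes) / (2 + successes + failures))
```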
USDA-ARS?s Scientific Manuscript database
This randomized, double-blinded, clinical trial assessed the effect of high hydrostatic pressure processing (HPP) on genogroup I.1 human norovirus (HuNoV) inactivation in virus-seeded oysters when ingested by subjects. The safety and efficacy of HPP treatments were assessed in three study phases wi...
78 FR 57033 - United States Standards for Condition of Food Containers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-17
[Only fragments of this Federal Register notice survive.] The recoverable text concerns the sampling of food containers during production: stationary lot sampling is defined as the process of randomly selecting sample units from a lot whose production has been completed. Remaining fragments list container defects and size thresholds, such as a stringy seal of less than 1/16 inch (excessive plastic threads showing at the edge of the seal area).
Early stage hot spot analysis through standard cell base random pattern generation
NASA Astrophysics Data System (ADS)
Jeon, Joong-Won; Song, Jaewan; Kim, Jeong-Lim; Park, Seongyul; Yang, Seung-Hune; Lee, Sooryong; Kang, Hokyu; Madkour, Kareem; ElManhawy, Wael; Lee, SeungJo; Kwan, Joe
2017-04-01
Due to the limited availability of DRC-clean patterns during process and RET recipe development, OPC recipes are not tested with high pattern coverage. Varied patterns can help OPC engineers detect patterns that are sensitive to lithographic effects, and random pattern generation is needed to secure a robust OPC recipe. However, simple random patterns that do not reflect the real product layout style cannot cover patterning hotspots at the production level, so using them for OPC optimization is not effective; it is important to generate random patterns similar to real product patterns. This paper presents a strategy for generating random patterns based on design architecture information, and for preventing hotspots at an early process development stage, through a tool called Layout Schema Generator (LSG). Using LSG, we generate standard-cell-based random patterns reflecting the real design cell structure: fin pitch, gate pitch and cell height. The output standard cells from LSG are passed to an analysis methodology that assesses their hotspot severity by assigning a score according to their optical image parameters (NILS, MEEF, %PV band), so that potential hotspots can be identified by ranking. This flow is demonstrated on Samsung 7nm technology, optimizing the OPC recipe early enough in the process to avoid the use of problematic patterns.
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are comparable in size to those of a stochastic physics ensemble.
Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks
2006-09-01
[Only fragments of this report survive.] The report describes a track-before-detect process (see [5] for a description), in which the final determination of a target presence is deferred until after track formation. The recoverable text derives expressions for the probability of successful search and the probability of false search for modeling the track-before-detect process, describes a numerical sensor network performance model, and considers sensors deployed in a random manner (positions randomly sampled from a uniform distribution).
Renormalized Energy Concentration in Random Matrices
NASA Astrophysics Data System (ADS)
Borodin, Alexei; Serfaty, Sylvia
2013-05-01
We define a "renormalized energy" as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of Sandier and Serfaty (From the Ginzburg-Landau model to vortex lattice problems, 2012; 1D log-gases and the renormalized energy, 2013). Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix β-sine processes on the real line ( β = 1,2,4), and Ginibre point process and zeros of Gaussian analytic functions process in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the β = 2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.
ERIC Educational Resources Information Center
Cotabish, Alicia; Dailey, Deborah; Hughes, Gail D.; Robinson, Ann
2011-01-01
In order to increase the quality and quantity of science instruction, elementary teachers must receive professional development in science learning processes. The current study was part of a larger randomized field study of teacher and student learning in science. In two districts in a southern state, researchers randomly assigned teacher…
ERIC Educational Resources Information Center
Chittleborough, Catherine R.; Nicholson, Alexandra L.; Basker, Elaine; Bell, Sarah; Campbell, Rona
2012-01-01
This article explores factors that may influence hand washing behaviour among pupils and staff in primary schools. A qualitative process evaluation within a cluster randomized controlled trial included pupil focus groups (n = 16, aged 6-11 years), semi-structured interviews (n = 16 teachers) and observations of hand washing facilities (n = 57).…
ERIC Educational Resources Information Center
Smith, Toni M.; Hjalmarson, Margret A.
2013-01-01
The purpose of this study is to examine prospective mathematics specialists' engagement in an instructional sequence designed to elicit and develop their understandings of random processes. The study was conducted with two different sections of a probability and statistics course for K-8 teachers. Thirty-two teachers participated. Video analyses…
Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial
Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.
2018-01-01
This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual art therapy. PTSD Checklist–Military Version and Beck Depression Inventory–II scores improved with treatment in both groups with no significant difference in improvement between the experimental and control groups. Art therapy in conjunction with CPT was found to improve trauma processing and veterans considered it to be an important part of their treatment as it provided healthy distancing, enhanced trauma recall, and increased access to emotions. PMID:29332989
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
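The sum-of-Markov-processes approximation can be sketched numerically. Below, five discrete-time first-order Markov (AR(1)) processes with correlation times spread over several decades are summed; the correlation times and variances are hypothetical, chosen only to illustrate how such a sum can mimic the broadband clock-noise spectra implied by an Allan variance model.

```python
import numpy as np

rng = np.random.default_rng(11)
dt, n = 1.0, 100_000

# Five first-order Markov (discrete AR(1)) processes whose correlation
# times span several decades; their sum approximates a power-law spectrum.
taus = np.logspace(0, 4, 5)            # hypothetical correlation times
x = np.zeros(n)
for tau in taus:
    phi = np.exp(-dt / tau)            # AR(1) coefficient for this time scale
    sigma = np.sqrt(1.0 - phi**2)      # makes the stationary variance unity
    e = rng.normal(size=n) * sigma
    xi = np.empty(n)
    xi[0] = rng.normal()
    for k in range(1, n):
        xi[k] = phi * xi[k - 1] + e[k]
    x += xi / np.sqrt(len(taus))       # equal-weight sum, unit total variance

print("sample variance of the summed process:", x.var())   # ~1.0
```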
Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao
2016-07-15
We present a real-time and fully integrated quantum random number generator (QRNG) based on measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is notable for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
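Toeplitz-matrix hashing itself is compact enough to sketch in software. The toy extractor below builds a random binary Toeplitz matrix from a seed and multiplies it against one raw block over GF(2); the block sizes are arbitrary assumptions, the output length would in practice be set by the source's min-entropy, and a hardware pipeline (or FFT-based multiplication) is needed at the speeds the paper reports.

```python
import numpy as np

rng = np.random.default_rng(12)
n_in, n_out = 256, 128     # raw input block length and extracted length

# A binary Toeplitz matrix is fully defined by its first row and first
# column, i.e. by n_in + n_out - 1 seed bits.
seed = rng.integers(0, 2, size=n_in + n_out - 1)
idx = np.arange(n_out)[:, None] - np.arange(n_in)[None, :] + n_in - 1
T = seed[idx]                               # constant along each diagonal

raw = rng.integers(0, 2, size=n_in)         # one block of raw (imperfect) bits
extracted = T.dot(raw) % 2                  # matrix-vector product over GF(2)
print(extracted[:16])
```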
On the design of henon and logistic map-based random number generator
NASA Astrophysics Data System (ADS)
Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah
2017-10-01
The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) is one approach to generating the key sequence. The randomness sources of TRNGs divide into three main groups: electrical-noise based, jitter based, and chaos based. The chaos-based group utilizes a non-linear dynamic system (continuous-time or discrete-time) as an entropy source. In this study, a new design of TRNG based on a discrete-time chaotic system is proposed, which is then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as a harvester method to obtain the series of random bits. Without any post-processing, the proposed design generated random bit sequences with high entropy values that passed all NIST 800.22 statistical tests.
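A software caricature of the combined 2D/1D design is shown below: a Henon map and a logistic map are iterated side by side, a comparator turns each state into a bit, and the two bit streams are combined by XOR. The map parameters, thresholds, initial conditions, and the XOR combination are assumptions for illustration, not the paper's exact harvester.

```python
import numpy as np

def henon_logistic_bits(n, seed=(0.1, 0.3, 0.4)):
    """Toy sketch: iterate a Henon map (2D) and a logistic map (1D),
    harvest one bit per step via threshold comparators, combine by XOR."""
    x, y, z = seed                       # hypothetical initial conditions
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x, y = 1.0 - 1.4 * x * x + y, 0.3 * x     # Henon map (a=1.4, b=0.3)
        z = 4.0 * z * (1.0 - z)                   # logistic map (r=4)
        bits[i] = (x > 0.0) ^ (z > 0.5)           # comparators + XOR
    return bits

b = henon_logistic_bits(100_000)
print("fraction of ones:", b.mean())   # should be close to 0.5
```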
Conserved linear dynamics of single-molecule Brownian motion.
Serag, Maged F; Habuchi, Satoshi
2017-06-06
Macromolecular diffusion in homogeneous fluid at length scales greater than the size of the molecule is regarded as a random process. The mean-squared displacement (MSD) of molecules in this regime increases linearly with time. Here we show that non-random motion of DNA molecules in this regime that is undetectable by the MSD analysis can be quantified by characterizing the molecular motion relative to a latticed frame of reference. Our lattice occupancy analysis reveals unexpected sub-modes of motion of DNA that deviate from expected random motion in the linear, diffusive regime. We demonstrate that a subtle interplay between these sub-modes causes the overall diffusive motion of DNA to appear to conform to the linear regime. Our results show that apparently random motion of macromolecules could be governed by non-random dynamics that are detectable only by their relative motion. Our analytical approach should advance broad understanding of diffusion processes of fundamental relevance.
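The contrast the authors draw between MSD and lattice-based statistics can be illustrated schematically; the occupancy count below is a simplified stand-in for their lattice occupancy analysis, not the published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
steps = rng.normal(0.0, 1.0, size=(100_000, 2))
traj = np.cumsum(steps, axis=0)                # a 2D Brownian trajectory

# MSD: for pure diffusion it grows linearly with lag time.
lags = np.array([1, 10, 100, 1000])
msd = [np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags]
print(np.round(np.array(msd) / lags, 2))       # roughly constant => linear MSD

# Lattice occupancy: count visits to cells of a fixed grid.  Deviations of
# the visit-count statistics from the random-walk expectation can flag
# non-random sub-modes that the MSD alone cannot see.
cells = np.floor(traj / 5.0).astype(int)
_, counts = np.unique(cells, axis=0, return_counts=True)
print(counts.mean(), counts.max())
```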
A Statistical Method to Distinguish Functional Brain Networks
Fujita, André; Vidal, Maciel C.; Takahashi, Daniel Y.
2017-01-01
One major problem in neuroscience is the comparison of functional brain networks of different populations, e.g., distinguishing the networks of controls and patients. Traditional algorithms are based on a search for isomorphism between networks, assuming that they are deterministic. However, biological networks present randomness that cannot be well modeled by those algorithms. For instance, functional brain networks of distinct subjects of the same population can differ due to individual characteristics. Moreover, networks of subjects from different populations can be generated through the same stochastic process. Thus, a better hypothesis is that networks are generated by random processes. In this case, subjects from the same group are samples from the same random process, whereas subjects from different groups are generated by distinct processes. Using this idea, we developed a statistical test called ANOGVA to test whether two or more populations of graphs are generated by the same random graph model. Our simulation results demonstrate that we can precisely control the rate of false positives and that the test is powerful in discriminating random graphs generated by different models and parameters. The method also proved robust for unbalanced data. As an example, we applied ANOGVA to an fMRI dataset composed of controls and patients diagnosed with autism or Asperger syndrome. ANOGVA identified the cerebellar functional sub-network as statistically different between controls and autism (p < 0.001). PMID:28261045
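ANOGVA itself is defined in terms of graph spectral densities; the sketch below is a simplified permutation-test analog on adjacency spectra, with the graph sizes, models, and test statistic chosen as assumptions for illustration.

```python
import numpy as np

def er_graph(n, p, rng):
    """Adjacency matrix of an Erdos-Renyi random graph."""
    a = np.triu((rng.random((n, n)) < p).astype(float), k=1)
    return a + a.T

def spectrum(a):
    return np.sort(np.linalg.eigvalsh(a))

def stat(group_a, group_b):
    """Distance between the groups' mean adjacency spectra."""
    return np.linalg.norm(np.mean(group_a, axis=0) - np.mean(group_b, axis=0))

rng = np.random.default_rng(42)
ga = np.array([spectrum(er_graph(50, 0.10, rng)) for _ in range(20)])
gb = np.array([spectrum(er_graph(50, 0.12, rng)) for _ in range(20)])

observed = stat(ga, gb)
pooled = np.vstack([ga, gb])
perms = []
for _ in range(2000):
    idx = rng.permutation(len(pooled))        # permute group labels
    perms.append(stat(pooled[idx[:20]], pooled[idx[20:]]))
p_value = np.mean(np.array(perms) >= observed)
print(observed, p_value)                      # p should be small: different models
```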
Speech Recognition as a Transcription Aid: A Randomized Comparison With Standard Transcription
Mohr, David N.; Turner, David W.; Pond, Gregory R.; Kamath, Joseph S.; De Vos, Cathy B.; Carpenter, Paul C.
2003-01-01
Objective. Speech recognition promises to reduce information entry costs for clinical information systems. It is most likely to be accepted across an organization if physicians can dictate without concerning themselves with real-time recognition and editing; assistants can then edit and process the computer-generated document. Our objective was to evaluate the use of speech-recognition technology in a randomized controlled trial using our institutional infrastructure. Design. Clinical note dictations from physicians in two specialty divisions were randomized to either a standard transcription process or a speech-recognition process. Secretaries and transcriptionists also were assigned randomly to each of these processes. Measurements. The duration of each dictation was measured. The amount of time spent processing a dictation to yield a finished document also was measured. Secretarial and transcriptionist productivity, defined as hours of secretary work per minute of dictation processed, was determined for speech recognition and standard transcription. Results. Secretaries in the endocrinology division were 87.3% (confidence interval, 83.3%, 92.3%) as productive with the speech-recognition technology as implemented in this study as they were using standard transcription. Psychiatry transcriptionists and secretaries were similarly less productive. Author, secretary, and type of clinical note were significant (p < 0.05) predictors of productivity. Conclusion. When implemented in an organization with an existing document-processing infrastructure (which included training and interfaces of the speech-recognition editor with the existing document entry application), speech recognition did not improve the productivity of secretaries or transcriptionists. PMID:12509359
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
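A Python analog of such a generator is easy to sketch; inverse-transform sampling is shown where the inverse CDF has a closed form (exponential, triangular), with library routines for the rest. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
u = rng.random(10_000)                      # the underlying uniform(0,1) stream

uniform = 2.0 + 3.0 * u                     # uniform on [2, 5]
exponential = -np.log(1.0 - u) / 0.5        # inverse transform, rate 0.5

# Triangular on [a, b] with mode c, by inverting the piecewise CDF.
a, b, c = 0.0, 1.0, 0.3
f = (c - a) / (b - a)
triangular = np.where(u < f,
                      a + np.sqrt(u * (b - a) * (c - a)),
                      b - np.sqrt((1 - u) * (b - a) * (b - c)))

normal = rng.normal(0.0, 1.0, 10_000)
binomial = rng.binomial(20, 0.3, 10_000)
poisson = rng.poisson(4.0, 10_000)
pascal = rng.negative_binomial(5, 0.4, 10_000)   # Pascal = negative binomial

print(exponential.mean())   # ~ 1/0.5 = 2
```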
Stochastic resetting in backtrack recovery by RNA polymerases
NASA Astrophysics Data System (ADS)
Roldán, Édgar; Lisica, Ana; Sánchez-Taltavull, Daniel; Grill, Stephan W.
2016-06-01
Transcription is a key process in gene expression, in which RNA polymerases produce a complementary RNA copy from a DNA template. RNA polymerization is frequently interrupted by backtracking, a process in which polymerases perform a random walk along the DNA template. Recovery of polymerases from the transcriptionally inactive backtracked state is determined by a kinetic competition between one-dimensional diffusion and RNA cleavage. Here we describe backtrack recovery as a continuous-time random walk, where the time for a polymerase to recover from a backtrack of a given depth is described as a first-passage time of a random walker to reach an absorbing state. We represent RNA cleavage as a stochastic resetting process and derive exact expressions for the recovery time distributions and mean recovery times from a given initial backtrack depth for both continuous and discrete-lattice descriptions of the random walk. We show that recovery time statistics do not depend on the discreteness of the DNA lattice when the rate of one-dimensional diffusion is large compared to the rate of cleavage.
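A Monte Carlo sketch of the discrete-lattice picture is straightforward: the polymerase hops one site deeper or shallower at rate k each, and a cleavage event at rate r resets it to the recovered state. The rates and initial depth below are illustrative, not fitted values from the paper.

```python
import numpy as np

def recovery_time(depth, k=1.0, r=0.1, rng=None):
    """Time for a backtracked walker to reach 0 by +/-1 hops (rate k each)
    or to be reset to 0 by a cleavage event (rate r) -- a Gillespie loop."""
    rng = rng or np.random.default_rng()
    pos, t = depth, 0.0
    while pos > 0:
        total = 2.0 * k + r
        t += rng.exponential(1.0 / total)       # waiting time to next event
        event = rng.random() * total
        if event < r:
            return t                            # cleavage: instantaneous recovery
        pos += 1 if event < r + k else -1       # otherwise diffuse one step
    return t

rng = np.random.default_rng(3)
times = [recovery_time(5, rng=rng) for _ in range(5000)]
print(np.mean(times))   # estimate of the mean recovery time from depth 5
```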
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
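The claim is easy to probe numerically; in the sketch below the summand distribution (a broad log-normal) and the sample sizes are assumptions chosen for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Sums of N broad, positive summands (log-normal with sigma = 2).
N, trials = 100, 20_000
sums = rng.lognormal(mean=0.0, sigma=2.0, size=(trials, N)).sum(axis=1)

# Compare the sum to a Gaussian and to a log-normal via the K-S distance.
z = (sums - sums.mean()) / sums.std()
ks_gauss = stats.kstest(z, "norm").statistic
logs = np.log(sums)
zl = (logs - logs.mean()) / logs.std()
ks_lognorm = stats.kstest(zl, "norm").statistic
print(ks_gauss, ks_lognorm)   # the log-normal fit should be markedly closer
```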
Adaptive threshold shearlet transform for surface microseismic data denoising
NASA Astrophysics Data System (ADS)
Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan
2018-06-01
Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly hampers the identification and location of microseismic events. The shearlet transform is a new multiscale transform that can effectively process low-magnitude microseismic data. In the shearlet domain, valid signals and random noise have different distributions, so shearlet coefficients can be shrunk by thresholding; the threshold is therefore vital in suppressing random noise. Conventional threshold denoising algorithms usually apply the same threshold to all coefficients, which causes inefficient noise suppression or loss of valid signal. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each directional subband. In each subband, an adjustment factor is obtained from each coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the shearlet coefficients. Denoising experiments on synthetic records and field data illustrate that the proposed method suppresses random noise and preserves valid signal better than the conventional shearlet denoising method.
Price, John M.; Colflesh, Gregory J. H.; Cerella, John; Verhaeghen, Paul
2014-01-01
We investigated the effects of 10 hours of practice on variations of the N-Back task to investigate the processes underlying possible expansion of the focus of attention within working memory. Using subtractive logic, we showed that random access (i.e., Sternberg-like search) yielded a modest effect (a 50% increase in speed) whereas the processes of forward access (i.e., retrieval in order, as in a standard N-Back task) and updating (i.e., changing the contents of working memory) were executed about 5 times faster after extended practice. We additionally found that extended practice increased working memory capacity as measured by the size of the focus of attention for the forward-access task, but not for variations where probing was in random order. This suggests that working memory capacity may depend on the type of search process engaged, and that certain working-memory-related cognitive processes are more amenable to practice than others. PMID:24486803
Coupled continuous time-random walks in quenched random environment
NASA Astrophysics Data System (ADS)
Magdziarz, M.; Szczotka, W.
2018-02-01
We introduce a coupled continuous-time random walk with the coupling characteristic of Lévy walks. Additionally, we assume that the walker moves in a quenched random environment, i.e. the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as in the case of the uncoupled quenched trap model for Lévy flights.
Anhøj, Jacob; Olesen, Anne Vingaard
2014-01-01
A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
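The shift and crossings rules can be sketched as follows, using the signal limits commonly given in this literature (longest run above round(log2(n) + 3); number of crossings below a binomial critical value). Treat the exact limit formulas as assumptions drawn from that literature rather than a verbatim reimplementation.

```python
import numpy as np
from scipy import stats

def run_chart_signals(y):
    """Shift and crossings tests for non-random variation around the median."""
    y = np.asarray(y, dtype=float)
    med = np.median(y)
    sides = np.sign(y - med)
    sides = sides[sides != 0]                  # drop points exactly on the median
    n = len(sides)

    runs = np.split(sides, np.where(np.diff(sides) != 0)[0] + 1)
    longest_run = max(len(r) for r in runs)
    n_crossings = np.sum(np.diff(sides) != 0)

    run_limit = round(np.log2(n) + 3)                  # shift rule limit
    cross_limit = stats.binom.ppf(0.05, n - 1, 0.5)    # crossings rule limit
    return longest_run > run_limit, n_crossings < cross_limit

rng = np.random.default_rng(5)
stable = rng.normal(0, 1, 24)
shifted = np.concatenate([rng.normal(0, 1, 12), rng.normal(2, 1, 12)])
print(run_chart_signals(stable))    # expect (False, False)
print(run_chart_signals(shifted))   # the shift should trigger a signal
```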
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm based on wave-field separation, which exploits the scale differences between effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with a Hausdorff-dimension constraint, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose the seismic data into a series of intrinsic mode functions (IMFs) of different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. We then use a threshold correlation filtering process to separate the valid signal from the random noise. Compared with the traditional EMD method, the results show that the new method achieves better random noise suppression.

The implementation process: The EMD algorithm is used to decompose seismic signals into IMF sets, whose spectra are then analyzed. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: first, effective-wave components of larger scale; second, noise components of smaller scale; third, IMF components containing both signal and random noise. The third kind of IMF component is then processed by the Hausdorff dimension algorithm: an appropriate time-window size, initial step, and increment are selected to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise lies between 1.0 and 1.05, while the dimension of the effective wave lies between 1.05 and 2.0. On this basis, using the dimension difference between random noise and effective signal, we extract the sample points whose fractal dimension is at most 1.05 from each IMF component in order to separate the residual noise. Reconstructing from the IMF components after dimension filtering, together with the effective-wave IMF components from the first selection, yields the de-noised result.
Certified randomness in quantum physics.
Acín, Antonio; Masanes, Lluis
2016-12-07
The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.
Fast approach for toner saving
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Kurilin, Ilya V.; Rychagov, Michael N.; Lee, Hokeun; Kim, Sangho; Choi, Donchul
2011-01-01
Reducing toner consumption is an important task in modern printing devices and has a significant positive ecological impact. Existing toner-saving approaches have two main drawbacks: the appearance of hardcopy in toner-saving mode is worse than in normal mode, and processing the whole rendered page bitmap requires significant computational cost. We propose to add small holes of various shapes and sizes at random places inside the character bitmaps stored in the font cache. This random perforation scheme is based on the processing pipeline in the RIP of the standard printer languages PostScript and PCL. Processing text characters only (and, moreover, processing each character for a given font and size only once) is an extremely fast procedure. The approach does not degrade halftoned bitmaps or business graphics, and it provides toner savings of up to 15-20% for typical office documents. The rate of toner saving is adjustable. The altered appearance of the resulting characters is almost indistinguishable from solid black text, owing to the random placement of small holes inside the character regions. The suggested method automatically skips small fonts to preserve their quality. Readability of the processed text is preserved, and OCR programs successfully process scanned hardcopies.
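A toy version of the perforation idea, on a synthetic glyph bitmap, is shown below; the hole size, the target saving rate, and the small-glyph threshold are illustrative assumptions, not the authors' production parameters.

```python
import numpy as np

def perforate(glyph, saving=0.15, hole=2, min_pixels=200, rng=None):
    """Punch small square holes at random interior positions of a binary
    glyph bitmap until roughly `saving` of its black pixels are removed.
    Small glyphs (fewer than `min_pixels` black pixels) are left untouched."""
    rng = rng or np.random.default_rng()
    out = glyph.copy()
    black = glyph.sum()
    if black < min_pixels:
        return out
    target = int(saving * black)
    ys, xs = np.nonzero(glyph)
    while black - out.sum() < target:
        i = rng.integers(len(ys))
        y, x = ys[i], xs[i]
        out[y : y + hole, x : x + hole] = 0   # one small hole
    return out

rng = np.random.default_rng(2)
glyph = np.zeros((40, 30), dtype=np.uint8)
glyph[5:35, 5:25] = 1                         # stand-in for a cached character
saved = 1 - perforate(glyph, rng=rng).sum() / glyph.sum()
print(f"toner saved: {saved:.1%}")
```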
ERIC Educational Resources Information Center
Parlar, Mahmut
2004-01-01
Brownian motion is an important stochastic process used in modelling the random evolution of stock prices. In their 1973 seminal paper--which led to the awarding of the 1997 Nobel prize in Economic Sciences--Fischer Black and Myron Scholes assumed that the random stock price process is described (i.e., generated) by Brownian motion. Despite its…
Deposition on disordered substrates with precursor layer diffusion
NASA Astrophysics Data System (ADS)
Filipe, J. A. N.; Rodgers, G. J.; Tavassoli, Z.
1998-09-01
Recently we introduced a one-dimensional accelerated random sequential adsorption process as a model for chemisorption with precursor layer diffusion. In this paper we consider this deposition process on disordered or impure substrates. The problem is solved exactly on both the lattice and continuum and for various impurity distributions. The results are compared with those from the standard random sequential adsorption model.
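For readers new to random sequential adsorption, a plain 1D continuum RSA baseline (Rényi's car parking problem, without disorder or precursor diffusion) can be simulated in a few lines; the jamming coverage is known to approach roughly 0.7476.

```python
import numpy as np

def rsa_parking(L=100.0, attempts=100_000, rng=None):
    """Random sequential adsorption of unit intervals on [0, L]:
    drop each arrival uniformly at random and keep it only if it does
    not overlap an already adsorbed interval."""
    rng = rng or np.random.default_rng()
    placed = []
    for x in rng.uniform(0.0, L - 1.0, attempts):
        if all(abs(x - y) >= 1.0 for y in placed):
            placed.append(x)
    return len(placed) / L

print(rsa_parking())   # approaches ~0.7476 as the line fills to jamming
```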
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 x 2 complex non-Hermitian random matrices.
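The nearest-neighbour identity reported here can be checked numerically: spacings of a 2D homogeneous Poisson process, normalized to unit mean, should follow the GOE Wigner surmise p(s) = (π/2) s exp(-π s²/4). The sample size and binning below are arbitrary, and small deviations from the surmise at the boundary of the sampling window are expected.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((20_000, 2))                 # 2D homogeneous Poisson (unit square)
d, _ = cKDTree(pts).query(pts, k=2)           # d[:, 1] is the nearest-neighbour distance
s = d[:, 1] / d[:, 1].mean()                  # spacings normalized to unit mean

# Wigner's GOE surmise: p(s) = (pi/2) s exp(-pi s^2 / 4).
hist, edges = np.histogram(s, bins=50, range=(0, 3), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
wigner = 0.5 * np.pi * mid * np.exp(-0.25 * np.pi * mid**2)
print(np.max(np.abs(hist - wigner)))          # small, up to sampling/boundary effects
```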
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations is fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach supports dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structural response have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
Heo, Moonseong; Meissner, Paul; Litwin, Alain H; Arnsten, Julia H; McKee, M Diane; Karasz, Alison; McKinley, Paula; Rehm, Colin D; Chambers, Earle C; Yeh, Ming-Chin; Wylie-Rosett, Judith
2017-01-01
Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold-standard design, but by design it forces participants to comply with a randomly assigned intervention regardless of their preference. The randomized clinical trial may therefore impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, and to maintain randomization, we propose an alternative design that allows participants' preference after randomization, which we call a "preference option randomized design (PORD)". In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed consent process before randomization. Specifically, the consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered to have no alternate preference. In sum, the preference option randomized design intends to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme enables the definition of five effects that are interconnected through common design parameters (comparative, preference, selection, intent-to-treat, and overall/as-treated) to collectively guide decision making between interventions. Statistical power functions for testing all these effects are derived, and simulations verified the validity of the power functions under normal and binomial distributions.
On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sinha, A. K.
1973-01-01
Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
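The FFT-based method can be sketched for two correlated series as follows; the target cross-spectral density matrix, the per-frequency Cholesky factorization, and the scaling conventions are assumptions for illustration, not the thesis' exact formulation.

```python
import numpy as np

def simulate_correlated(S, N, rng):
    """Simulate two real, zero-mean Gaussian series whose 2x2 (cross-)spectral
    density matrix at FFT frequency k is S[k] (Hermitian, positive definite).
    Uses a Cholesky factor of S at each frequency and one inverse FFT."""
    A = np.zeros((N, 2), dtype=complex)
    for k in range(1, N // 2):                 # positive frequencies only
        H = np.linalg.cholesky(S[k])
        xi = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
        A[k] = np.sqrt(N) * H @ xi
        A[N - k] = np.conj(A[k])               # Hermitian symmetry -> real output
    return np.fft.ifft(A, axis=0).real

N = 4096
f = np.fft.fftfreq(N)
# Assumed target: two Lorentzian spectra with frequency-flat coherence 0.6.
S = np.zeros((N, 2, 2), dtype=complex)
S[:, 0, 0] = S[:, 1, 1] = 1.0 / (1.0 + (f / 0.05) ** 2)
S[:, 0, 1] = S[:, 1, 0] = 0.6 * S[:, 0, 0]

rng = np.random.default_rng(1)
x = simulate_correlated(S, N, rng)
print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])    # close to the target coherence 0.6
```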
Simulation of diffuse-charge capacitance in electric double layer capacitors
NASA Astrophysics Data System (ADS)
Sun, Ning; Gersappe, Dilip
2017-01-01
We use a Lattice Boltzmann Model (LBM) in order to simulate diffuse-charge dynamics in Electric Double Layer Capacitors (EDLCs). Simulations are carried out for both the charge and the discharge processes on 2D systems of complex random electrode geometries (pure random, random spheres and random fibers). The steric effect of concentrated solutions is considered by using a Modified Poisson-Nernst-Planck (MPNP) equations and compared with regular Poisson-Nernst-Planck (PNP) systems. The effects of electrode microstructures (electrode density, electrode filler morphology, filler size, etc.) on the net charge distribution and charge/discharge time are studied in detail. The influence of applied potential during discharging process is also discussed. Our studies show how electrode morphology can be used to tailor the properties of supercapacitors.
Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue
NASA Astrophysics Data System (ADS)
Kree, P.; Soize, C.
The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, are analytically explored. Attention is given to theoretical examinations such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to study the effects of earthquakes on structures and to the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations with a variable number of links and linear oscillators undergoing the square of Gaussian processes are investigated.
Relatively Random: Context Effects on Perceived Randomness and Predicted Outcomes
ERIC Educational Resources Information Center
Matthews, William J.
2013-01-01
This article concerns the effect of context on people's judgments about sequences of chance outcomes. In Experiment 1, participants judged whether sequences were produced by random, mechanical processes (such as a roulette wheel) or skilled human action (such as basketball shots). Sequences with lower alternation rates were judged more likely to…
Multiple filters affect tree species assembly in mid-latitude forest communities.
Kubota, Y; Kusumoto, B; Shiono, T; Ulrich, W
2018-05-01
Species assembly patterns of local communities are shaped by the balance between multiple abiotic/biotic filters and dispersal that both select individuals from species pools at the regional scale. Knowledge regarding functional assembly can provide insight into the relative importance of the deterministic and stochastic processes that shape species assembly. We evaluated the hierarchical roles of the α niche and β niches by analyzing the influence of environmental filtering relative to functional traits on geographical patterns of tree species assembly in mid-latitude forests. Using forest plot datasets, we examined the α niche traits (leaf and wood traits) and β niche properties (cold/drought tolerance) of tree species, and tested non-randomness (clustering/over-dispersion) of trait assembly based on null models that assumed two types of species pools related to biogeographical regions. For most plots, species assembly patterns fell within the range of random expectation. However, particularly for cold/drought tolerance-related β niche properties, deviation from randomness was frequently found; non-random clustering was predominant in higher latitudes with harsh climates. Our findings demonstrate that both randomness and non-randomness in trait assembly emerged as a result of the α and β niches, although we suggest the potential role of dispersal processes and/or species equalization through trait similarities in generating the prevalence of randomness. Clustering of β niche traits along latitudinal climatic gradients provides clear evidence of species sorting by filtering particular traits. Our results reveal that multiple filters through functional niches and stochastic processes jointly shape geographical patterns of species assembly across mid-latitude forests.
Ji, Peter; DuBois, David L; Flay, Brian R; Brechling, Vanessa
2008-03-01
Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be most effective for recruiting schools into the study and assigning them to conditions. In 1 procedure (recruit and match/randomize), we would recruit schools and match them prior to randomization, and in the other (match/randomize and recruitment), we would match schools and randomize them prior to recruitment. We considered how each procedure impacted the randomization process and our ability to recruit schools into the study. After implementing the selected procedure, the equivalence of both treatment and control group schools and the participating and nonparticipating schools on school demographic variables was evaluated. We decided on the recruit and match/randomize procedure because we thought it would provide the opportunity to build rapport with the schools and prepare them for the randomization process, thereby increasing the likelihood that they would accept their randomly assigned conditions. Neither the treatment and control group schools nor the participating and nonparticipating schools exhibited statistically significant differences from each other on any of the school demographic variables. Recruitment of schools prior to matching and randomization in an MP-RCT may facilitate the recruitment of schools and thus enhance both the statistical power and the representativeness of study findings. Future research would benefit from the consideration of a broader range of variables (eg, readiness to implement a comprehensive prevention program) both in matching schools and in evaluating their representativeness to nonparticipating schools.
Garvin-Doxas, Kathy
2008-01-01
While researching student assumptions for the development of the Biology Concept Inventory (BCI; http://bioliteracy.net), we found that a wide class of student difficulties in molecular and evolutionary biology appears to be based on deep-seated, and often unaddressed, misconceptions about random processes. Data were based on more than 500 open-ended (primarily) college student responses, submitted online and analyzed through our Ed's Tools system, together with 28 thematic and think-aloud interviews with students, and the responses of students in introductory and advanced courses to questions on the BCI. Students believe that random processes are inefficient, whereas biological systems are very efficient. They are therefore quick to propose their own rational explanations for various processes, from diffusion to evolution. These rational explanations almost always make recourse to a driver, e.g., natural selection in evolution or concentration gradients in molecular biology, with the process taking place only when the driver is present, and ceasing when the driver is absent. For example, most students believe that diffusion only takes place when there is a concentration gradient, and that the mutational processes that change organisms occur only in response to natural selection pressures. An understanding that random processes take place all the time and can give rise to complex and often counterintuitive behaviors is almost totally absent. Even students who have had advanced or college physics, and can discuss diffusion correctly in that context, cannot make the transfer to biological processes, and passing through multiple conventional biology courses appears to have little effect on their underlying beliefs. PMID:18519614
Random walks with random velocities.
Zaburdaev, Vasily; Schmiedeberg, Michael; Stark, Holger
2008-07-01
We consider a random walk model that takes into account the velocity distribution of random walkers. Random motion with alternating velocities is inherent to various physical and biological systems. Moreover, the velocity distribution is often the first characteristic that is experimentally accessible. Here, we derive transport equations describing the dispersal process in the model and solve them analytically. The asymptotic properties of solutions are presented in the form of a phase diagram that shows all possible scaling regimes, including superdiffusive, ballistic, and superballistic motion. The theoretical results of this work are in excellent agreement with accompanying numerical simulations.
Boone, Kelly M; Gracious, Barbara; Klebanoff, Mark A; Rogers, Lynette K; Rausch, Joseph; Coury, Daniel L; Keim, Sarah A
2017-12-01
Despite advances in the health and long-term survival of infants born preterm, they continue to face developmental challenges, including higher risk for autism spectrum disorder (ASD) and atypical sensory processing patterns. This secondary analysis aimed to describe sensory profiles and explore effects of combined dietary docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), and gamma-linolenic acid (GLA) supplementation on parent-reported sensory processing in toddlers born preterm who were exhibiting ASD symptoms. The study was a 90-day randomized, double-blinded, placebo-controlled trial of 31 children aged 18-38 months who were born at ≤29 weeks' gestation. Mixed-effects regression analyses followed intent to treat and explored effects on parent-reported sensory processing measured by the Infant/Toddler Sensory Profile (ITSP). Baseline ITSP scores reflected atypical sensory processing, with the majority of atypical scores falling below the mean. Sensory processing sections: auditory (above=0%, below=65%), vestibular (above=13%, below=48%), tactile (above=3%, below=35%), oral sensory (above=10%; below=26%), visual (above=10%, below=16%); sensory processing quadrants: low registration (above=3%; below=71%), sensation avoiding (above=3%; below=39%), sensory sensitivity (above=3%; below=35%), and sensation seeking (above=10%; below=19%). Twenty-eight of 31 children randomized had complete outcome data. Although not statistically significant (p=0.13), the magnitude of the effect for reduction in behaviors associated with sensory sensitivity was medium to large (effect size=0.57). No other scale reflected a similar magnitude of effect (range: 0.10 to 0.32). The findings provide support for larger randomized trials of omega fatty acid supplementation for children at risk of sensory processing difficulties, especially those born preterm.
The informational architecture of the cell.
Walker, Sara Imari; Kim, Hyunju; Davies, Paul C W
2016-03-13
We compare the informational architecture of biological and random networks to identify informational features that may distinguish biological networks from random. The study presented here focuses on the Boolean network model for regulation of the cell cycle of the fission yeast Schizosaccharomyces pombe. We compare calculated values of local and global information measures for the fission yeast cell cycle to the same measures as applied to two different classes of random networks: Erdös-Rényi and scale-free. We report patterns in local information processing and storage that do indeed distinguish biological from random, associated with control nodes that regulate the function of the fission yeast cell-cycle network. Conversely, we find that integrated information, which serves as a global measure of 'emergent' information processing, does not differ from random for the case presented. We discuss implications for our understanding of the informational architecture of the fission yeast cell-cycle network in particular, and more generally for illuminating any distinctive physics that may be operative in life.
Persistent random walk of cells involving anomalous effects and random death
NASA Astrophysics Data System (ADS)
Fedotov, Sergei; Tan, Abby; Zubarev, Andrey
2015-04-01
The purpose of this paper is to implement a random death process into a persistent random walk model which produces sub-ballistic superdiffusion (Lévy walk). We develop a stochastic two-velocity jump model of cell motility for which the switching rate depends upon the time which the cell has spent moving in one direction. It is assumed that the switching rate is a decreasing function of residence (running) time. This assumption leads to the power law for the velocity switching time distribution. This describes the anomalous persistence of cell motility: the longer the cell moves in one direction, the smaller the switching probability to another direction becomes. We derive master equations for the cell densities with the generalized switching terms involving the tempered fractional material derivatives. We show that the random death of cells has an important implication for the transport process through tempering of the superdiffusive process. In the long-time limit we write stationary master equations in terms of exponentially truncated fractional derivatives in which the rate of death plays the role of tempering of a Lévy jump distribution. We find the upper and lower bounds for the stationary profiles corresponding to the ballistic transport and diffusion with the death-rate-dependent diffusion coefficient. Monte Carlo simulations confirm these bounds.
Supernatural believers attribute more intentions to random movement than skeptics: an fMRI study.
Riekki, Tapani; Lindeman, Marjaana; Raij, Tuukka T
2014-01-01
A host of research has attempted to explain why some believe in the supernatural and some do not. One suggested explanation for commonly held supernatural beliefs is that they are a by-product of theory of mind (ToM) processing. However, this does not explain why skeptics with intact ToM processes do not believe. We employed fMRI to investigate activation differences in ToM-related brain circuitries between supernatural believers (N = 12) and skeptics (N = 11) while they watched 2D animations of geometric objects moving intentionally or randomly and rated the intentionality of the animations. The ToM-related circuitries in the medial prefrontal cortex (mPFC) were localized by contrasting intention-rating-related and control-rating-related brain activation. Compared with the skeptics, the supernatural believers rated the random movements as more intentional and had stronger activation of the ToM-related circuitries during the animation with random movement. The strength of the ToM-related activation covaried with the intentionality ratings. These findings provide evidence that differences in ToM-related activations are associated with supernatural believers' tendency to interpret random phenomena in mental terms. Thus, differences in ToM processing may contribute to differences between believing and unbelieving.
A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges
Wang, Xu; Sun, Baitao
2014-01-01
Issues of load combination for earthquakes and heavy trucks are important in multihazard bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with their own characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castanheta model, which accounts for load duration and occurrence probability, describes well how random processes are converted to random variables and combined, but it imposes strict constraints on the choice of time interval if precise results are to be obtained. Turkstra's rule combines one load at its lifetime maximum with another load at its instantaneous (or mean) value, which looks more rational, but the results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castanheta model and Turkstra's rule. The modified model is based on conditional probability; it converts random processes to random variables relatively easily and accounts for non-maximum values in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
NASA Astrophysics Data System (ADS)
El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.
2015-10-01
The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable x and the optical random thickness L. The transport equation is then solved deterministically to obtain a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable L and the output process (the solution functions). The obtained averages of the solution functions are used to derive complete analytical averages for some physical quantities of interest, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average partial heat fluxes for the generalized problem with an internal radiation source are obtained and represented graphically.
Perception of Randomness: On the Time of Streaks
ERIC Educational Resources Information Center
Sun, Yanlong; Wang, Hongbin
2010-01-01
People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the…
Why Are People Bad at Detecting Randomness? A Statistical Argument
ERIC Educational Resources Information Center
Williams, Joseph J.; Griffiths, Thomas L.
2013-01-01
Errors in detecting randomness are often explained in terms of biases and misconceptions. We propose and provide evidence for an account that characterizes the contribution of the inherent statistical difficulty of the task. Our account is based on a Bayesian statistical analysis, focusing on the fact that a random process is a special case of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yi; Chen, Wei; Xu, Hongyi
To provide a seamless integration of manufacturing process simulation and fiber microstructure modeling, two new stochastic 3D microstructure reconstruction methods are proposed for two types of random fiber composites: random short fiber composites and Sheet Molding Compound (SMC) chopped fiber composites. A Random Sequential Adsorption (RSA) algorithm is first developed to embed statistical orientation information into the 3D RVE reconstruction of random short fiber composites. For the SMC composites, an optimized Voronoi-diagram-based approach is developed for capturing their substructure features. The proposed methods are distinguished from other reconstruction works by providing a way of integrating statistical information (the fiber orientation tensor) obtained from material processing simulation, as well as capturing the multiscale substructures of the SMC composites.
Zaharov, V V; Farahi, R H; Snyder, P J; Davison, B H; Passian, A
2014-11-21
Resolving weak spectral variations in the dynamic response of materials that are either dominated or excited by stochastic processes remains a challenge. Responses that are thermal in origin are particularly relevant examples due to the delocalized nature of heat. Despite its inherent properties in dealing with stochastic processes, the Karhunen-Loève expansion has not been fully exploited in measurement of systems that are driven solely by random forces or can exhibit large thermally driven random fluctuations. Here, we present experimental results and analysis of the archetypes (a) the resonant excitation and transient response of an atomic force microscope probe by the ambient random fluctuations and nanoscale photothermal sample response, and (b) the photothermally scattered photons in pump-probe spectroscopy. In each case, the dynamic process is represented as an infinite series with random coefficients to obtain pertinent frequency shifts and spectral peaks and demonstrate spectral enhancement for a set of compounds including the spectrally complex biomass. The considered cases find important applications in nanoscale material characterization, biosensing, and spectral identification of biological and chemical agents.
Real-time fast physical random number generator with a photonic integrated circuit.
Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu
2017-03-20
Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
The Central Limit Theorem for Supercritical Oriented Percolation in Two Dimensions
NASA Astrophysics Data System (ADS)
Tzioufas, Achillefs
2018-04-01
We consider the cardinality of supercritical oriented bond percolation in two dimensions. We show that, whenever the origin is conditioned to percolate, the process appropriately normalized converges asymptotically in distribution to the standard normal law. This resolves a longstanding open problem pointed out in several instances in the literature. The result applies also to the continuous-time analog of the process, viz. the basic one-dimensional contact process. We also derive general random-indices central limit theorems for associated random variables as byproducts of our proof.
Recommendations and illustrations for the evaluation of photonic random number generators
NASA Astrophysics Data System (ADS)
Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi
2017-09-01
The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε,τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
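A rough implementation of an (ε, τ) entropy-rate estimate, via block entropies of a coarse-grained and subsampled signal, is sketched below; the binning, block length, and estimator details are assumptions rather than the Cohen-Procaccia procedure in full.

```python
import numpy as np

def block_entropy(symbols, d):
    """Shannon entropy (bits) of length-d words of a symbol sequence."""
    words = np.lib.stride_tricks.sliding_window_view(symbols, d)
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def entropy_rate(x, eps, tau, d=3):
    """h(eps, tau) in bits per sample: coarse-grain x with resolution eps,
    subsample with step tau, and take the block-entropy increment
    h = (H(d+1) - H(d)) / tau."""
    s = np.floor(x[::tau] / eps).astype(int)
    return (block_entropy(s, d + 1) - block_entropy(s, d)) / tau

rng = np.random.default_rng(0)
white = rng.normal(size=100_000)
correlated = np.convolve(white, np.ones(8) / 8, mode="valid")
for signal in (white, correlated):
    print(entropy_rate(signal, eps=1.0, tau=1))   # lower rate for the correlated signal
```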
Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process
NASA Astrophysics Data System (ADS)
Salvi, Michele; Simenhaus, François
2018-05-01
We consider a random walk in dimension d≥ 1 in a dynamic random environment evolving as an interchange process with rate γ >0. We prove that, if γ is chosen large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ =+∞, where the environment is refreshed between each step of the walker. We extend the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process, in three ways: (i) we deal with any dimension d≥1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but also close to it.
Theoretical consideration of the energy resolution in planar HPGe detectors for low energy X-rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samedov, Victor V.
In this work, a theoretical consideration of the processes in planar High Purity Ge (HPGe) detectors for low-energy X-rays was carried out using the formalism of random stochastic processes. Using this formalism, the generating function of the process of X-ray registration in a planar HPGe detector was derived. Power series expansions of the detector amplitude and the variance in terms of the inverse bias voltage were derived. The coefficients of these expansions allow determining the Fano factor, the electron mobility-lifetime product, the nonuniformity of the trap density, and other characteristics of the semiconductor material.
Hancock, Laura M; Bruce, Jared M; Bruce, Amanda S; Lynch, Sharon G
2015-01-01
Between 40% and 65% of multiple sclerosis patients experience cognitive deficits, with processing speed and working memory most commonly affected. This pilot study investigated the effect of computerized cognitive training focused on improving processing speed and working memory. Participants were randomized into either an active or a sham training group and engaged in six weeks of training. The active training group improved on a measure of processing speed and attention following cognitive training, and data trended toward significance on measures of other domains. Results provide preliminary evidence that cognitive training with multiple sclerosis patients may produce moderate improvement in select areas of cognitive functioning.
Wilson, Lorna R M; Hopcraft, Keith I
2017-12-01
The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
ERIC Educational Resources Information Center
Zoblotsky, Todd; Ransford-Kaldon, Carolyn; Morrison, Donald M.
2011-01-01
The present paper describes the recruitment and site selection process that has been underway since January 2011, with particular emphasis on the use of Mahalanobis distance score to determine matched pairs of sites prior to randomization to treatment and control groups. Through a systematic winnowing process, the authors found that they could…
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
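A rough illustration of the simple Monte Carlo techniques reviewed in the abstract might look like the following "blind ant" walker on a lattice with immobile obstacles; the lattice size, obstacle fraction, and helper names are illustrative choices, not the paper's code.

```python
import numpy as np

def obstructed_walk(n_steps, L=100, obstacle_frac=0.3, seed=0):
    """'Blind ant' random walk on an L x L lattice with immobile point
    obstacles (the obstacle field tiles the plane periodically); moves
    onto an obstacle are rejected and the walker waits in place."""
    rng = np.random.default_rng(seed)
    obstacles = rng.random((L, L)) < obstacle_frac
    pos = np.array([0, 0])
    obstacles[0, 0] = False                     # start on a free site
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    path = [pos.copy()]
    for _ in range(n_steps):
        trial = pos + moves[rng.integers(4)]
        if not obstacles[tuple(trial % L)]:
            pos = trial
        path.append(pos.copy())
    return np.array(path)

# mean-square displacement reveals sub-diffusion at high obstacle densities
path = obstructed_walk(50_000)
msd = ((path - path[0]) ** 2).sum(axis=1)
```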
The Miniaturization of the AFIT Random Noise Radar
2013-03-01
I. Introduction: Recent advances in technology and signal processing techniques have opened the door to using an ultra-wideband random noise radar. (Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio; Department of Electrical and Computer Engineering, Graduate School of Engineering and Management. Approved for public release; distribution unlimited.)
Random sequences generation through optical measurements by phase-shifting interferometry
NASA Astrophysics Data System (ADS)
François, M.; Grosges, T.; Barchiesi, D.; Erra, R.; Cornet, A.
2012-04-01
The development of new techniques for producing random sequences with a high level of security is a challenging topic of research in modern cryptography. The proposed method is based on the measurement, by phase-shifting interferometry, of the speckle signals arising from the interaction between light and structures. We show how the combination of amplitude and phase distributions (maps) under a numerical process can produce random sequences. The produced sequences satisfy all the statistical requirements of randomness and can be used in cryptographic schemes.
Influence of the random walk finite step on the first-passage probability
NASA Astrophysics Data System (ADS)
Klimenkova, Olga; Menshutin, Anton; Shchur, Lev
2018-01-01
A well-known connection between the first-passage probability of a random walk and the distribution of the electrical potential described by the Laplace equation is studied. We simulate a random walk in the plane numerically as a discrete-time process with fixed step length and measure the first-passage probability of touching an absorbing circle of radius R in 2D. We find a systematic deviation of the first-passage probability from the exact function, which we attribute to the finite step length of the random walk.
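For a Brownian walker started off-centre inside an absorbing circle, the exact first-passage (harmonic-measure) density on the circle is the Poisson kernel, so a fixed-step simulation of the kind described can be checked against it. A hedged sketch, with illustrative parameters:

```python
import numpy as np

def poisson_kernel(theta, r, R):
    """Exact first-passage density on a circle of radius R for a Brownian
    walker started at distance r from the centre (harmonic measure)."""
    return (R**2 - r**2) / (2 * np.pi * (R**2 - 2 * R * r * np.cos(theta) + r**2))

def hit_angle(r0, R, step, rng):
    """Fixed-step walk started at (r0, 0); angle at first contact with the circle."""
    x, y = r0, 0.0
    while x * x + y * y < R * R:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        x += step * np.cos(phi)
        y += step * np.sin(phi)
    return np.arctan2(y, x)

rng = np.random.default_rng(2)
angles = [hit_angle(r0=25.0, R=50.0, step=1.0, rng=rng) for _ in range(5_000)]
# a histogram of `angles` deviates from poisson_kernel(theta, 25, 50)
# increasingly as `step` grows, which is the effect studied in the paper
```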
Bayesian estimation of Karhunen–Loève expansions: a random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE; furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
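For contrast with the Bayesian procedure of the paper, the standard sample-based KLE/PCA it builds on can be sketched in a few lines. This is an illustrative sketch only; the paper's contribution, the matrix Bingham posterior with Gibbs sampling, is not reproduced here.

```python
import numpy as np

def kle_from_samples(X, n_modes):
    """Classical (non-Bayesian) KLE/PCA of sample paths: rows of X are
    realizations of the process on a fixed grid."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_modes]                 # orthonormal spatial modes
    coeffs = (X - mean) @ basis.T        # per-sample mode amplitudes
    return mean, basis, coeffs

# Brownian-motion-like toy data: 50 paths on a 200-point grid
rng = np.random.default_rng(3)
X = np.cumsum(rng.normal(size=(50, 200)) / np.sqrt(200), axis=1)
mean, basis, coeffs = kle_from_samples(X, n_modes=5)
```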
Kantrowitz, Joshua T; Sharif, Zafar; Medalia, Alice; Keefe, Richard S E; Harvey, Philip; Bruder, Gerard; Barch, Deanna M; Choo, Tse; Lee, Seonjoo; Lieberman, Jeffrey A
2016-06-01
Small-scale studies of auditory processing cognitive remediation programs have demonstrated efficacy in schizophrenia. We describe a multicenter, rater-blinded, randomized, controlled study of auditory-focused cognitive remediation, conducted from June 24, 2010, to June 14, 2013, and approved by the local institutional review board at all sites. Prior to randomization, participants with schizophrenia (DSM-IV-TR) were stabilized on a standardized antipsychotic regimen (lurasidone [40-160 mg/d]), followed by randomization to adjunctive cognitive remediation: auditory focused (Brain Fitness) versus control (nonspecific video games), administered 1-2 times weekly for 30 sessions. Coprimary outcome measures included MATRICS Consensus Cognitive Battery (MCCB) and the University of California, San Diego, Performance-Based Skills Assessment-Brief scale. 120 participants were randomized and completed at least 1 auditory-focused cognitive remediation (n = 56) or video game control session (n = 64). 74 participants completed ≥ 25 sessions and postrandomization assessments. At study completion, the change from prestabilization was statistically significant for MCCB composite score (d = 0.42, P < .0001) across groups. Participants randomized to auditory-focused cognitive remediation had a trend-level higher mean MCCB composite score compared to participants randomized to control cognitive remediation (P = .08). After controlling for scores at the time of randomization, there were no significant between-treatment group differences at study completion. Auditory processing cognitive remediation combined with lurasidone did not lead to differential improvement over nonspecific video games. Across-group improvement from prestabilization baseline to study completion was observed, but since all participants were receiving lurasidone open label, it is difficult to interpret the source of these effects. Future studies comparing both pharmacologic and behavioral cognitive enhancers should consider a 2 × 2 design, using a control for both the medication and the cognitive remediation. ClinicalTrials.gov identifier: NCT01173874. © Copyright 2016 Physicians Postgraduate Press, Inc.
Concurrent design of quasi-random photonic nanostructures
Lee, Won-Kyu; Yu, Shuangcheng; Engel, Clifford J.; Reese, Thaddeus; Rhee, Dongjoon; Chen, Wei
2017-01-01
Nanostructured surfaces with quasi-random geometries can manipulate light over broadband wavelengths and wide ranges of angles. Optimization and realization of stochastic patterns have typically relied on serial, direct-write fabrication methods combined with real-space design. However, this approach is not suitable for customizable features or scalable nanomanufacturing. Moreover, trial-and-error processing cannot guarantee fabrication feasibility because processing–structure relations are not included in conventional designs. Here, we report wrinkle lithography integrated with concurrent design to produce quasi-random nanostructures in amorphous silicon at wafer scales that achieved over 160% light absorption enhancement from 800 to 1,200 nm. The quasi-periodicity of patterns, materials filling ratio, and feature depths could be independently controlled. We statistically represented the quasi-random patterns by Fourier spectral density functions (SDFs) that could bridge the processing–structure and structure–performance relations. Iterative search of the optimal structure via the SDF representation enabled concurrent design of nanostructures and processing. PMID:28760975
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. This memory involves the dependence between an arbitrary number of successive jumps of the process, while the waiting times between jumps are treated as i.i.d. random variables. The dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale, and was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. The model turns out to be exactly solvable analytically, which enables a direct comparison of its predictions with their empirical counterparts, for instance with the empirical velocity autocorrelation function. The present research thus significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
NASA Technical Reports Server (NTRS)
Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes; the radiometer output is thus a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary as well. The statistics of the calibration samples depend upon the time for which the samples are to be applied: the statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and the time when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
Processing of task-irrelevant emotional faces impacted by implicit sequence learning.
Peng, Ming; Cai, Mengfei; Zhou, Renlai
2015-12-02
Attentional load may be increased by task-relevant attention, such as task difficulty, or by task-irrelevant attention, such as an unexpected light spot on the screen. Several studies have focused on the influence of task-relevant attentional load on task-irrelevant emotion processing. In this study, we used event-related potentials to examine the impact of task-irrelevant attentional load on task-irrelevant expression processing. Eighteen participants identified the color of a word (i.e. the color Stroop task) while a picture of a fearful or a neutral face was shown in the background. The task-irrelevant attentional load was increased by regularly presented congruence trials (congruence between the color and the meaning of the word) in the regular condition, because implicit sequence learning was induced. We compared task-irrelevant expression processing between the regular condition and the random condition (in which congruence and incongruence trials were presented randomly). Behaviorally, reaction times for fearful faces were faster than for neutral faces in the random condition, whereas no significant difference was found in the regular condition. The event-related potential results indicated enhanced positive amplitudes in the P2, N2, and P3 components for fearful relative to neutral faces in the random condition. In comparison, only the P2 differed significantly between the two types of expressions in the regular condition. The study showed that attentional load increased by implicit sequence learning influenced the late processing of task-irrelevant expressions.
AVE-SESAME program for the REEDA System
NASA Technical Reports Server (NTRS)
Hickey, J. S.
1981-01-01
The REEDA system software was modified and improved to process the AVE-SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented, and the AVE/SESAME software was modified to incorporate random access file input and to interface with the new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers. Documentation was provided for the existing AVE/SESAME programs, outlining functional flow charts and interactive questions. All AVE/SESAME data sets were converted to random access format so that the developed software can access the entire AVE/SESAME data base, and the existing software was modified to allow processing of different AVE/SESAME data set types, including satellite, surface, and radar data.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx^k(1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Network Dynamics of Innovation Processes.
Iacopini, Iacopo; Milojević, Staša; Latora, Vito
2018-01-26
We introduce a model for the emergence of innovations, in which cognitive processes are described as random walks on the network of links among ideas or concepts, and an innovation corresponds to the first visit of a node. The transition matrix of the random walk depends on the network weights, while in turn the weight of an edge is reinforced by the passage of a walker. The presence of the network naturally accounts for the mechanism of the "adjacent possible," and the model reproduces both the rate at which novelties emerge and the correlations among them observed empirically. We show this by using synthetic networks and by studying real data sets on the growth of knowledge in different scientific disciplines. Edge-reinforced random walks on complex topologies offer a new modeling framework for the dynamics of correlated novelties and are another example of coevolution of processes and networks.
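A minimal sketch of an edge-reinforced random walk of the kind described, recording first visits as "innovations", is given below; the toy graph and the reinforcement increment delta are illustrative assumptions, not the paper's model parameters.

```python
import random
from collections import defaultdict

def edge_reinforced_walk(edges, start, n_steps, delta=1.0, seed=4):
    """Walk on an undirected graph; each traversed edge gains weight `delta`.
    Returns the sequence of first visits ('innovations')."""
    rng = random.Random(seed)
    w = defaultdict(lambda: 1.0)           # initial unit weights
    nbrs = defaultdict(list)
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    node, innovations, seen = start, [start], {start}
    for _ in range(n_steps):
        cands = nbrs[node]
        weights = [w[frozenset((node, v))] for v in cands]
        nxt = rng.choices(cands, weights=weights)[0]
        w[frozenset((node, nxt))] += delta  # reinforcement
        node = nxt
        if node not in seen:
            seen.add(node)
            innovations.append(node)
    return innovations

# ring of 20 nodes plus a few chords
edges = [(i, (i + 1) % 20) for i in range(20)] + [(0, 10), (5, 15)]
print(edge_reinforced_walk(edges, start=0, n_steps=200))
```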
Key management of the double random-phase-encoding method using public-key encryption
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2010-03-01
Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image is encrypted with the double random-phase-encoding method using the extended fractional Fourier transform. The key of the encryption process is encoded with the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm, and the encoded key is then transmitted to the receiver along with the encrypted image. In the decryption process, the encoded key is first decrypted using the secret key, and the encrypted image is then decrypted using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key is eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
NASA Astrophysics Data System (ADS)
Yang, Jyun-Bao; Chang, Ting-Chang; Huang, Jheng-Jie; Chen, Yu-Chun; Chen, Yu-Ting; Tseng, Hsueh-Chih; Chu, Ann-Kuo; Sze, Simon M.
2014-04-01
In this study, indium-gallium-zinc-oxide thin film transistors can be operated either as transistors or as resistance random access memory (RRAM) devices. Before the forming process, current-voltage transfer characteristics are observed, and resistance switching characteristics are measured after a forming process. These resistance switching characteristics exhibit two behaviors, dominated by different mechanisms: the mode 1 resistance switching behavior is due to oxygen vacancies, while mode 2 is dominated by the formation of an oxygen-rich layer. Furthermore, an easy approach is proposed to reduce power consumption when using these RRAM devices with the amorphous indium-gallium-zinc-oxide thin film transistor.
Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-14
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
Random Versus Nonrandom Peer Review: A Case for More Meaningful Peer Review.
Itri, Jason N; Donithan, Adam; Patel, Sohil H
2018-05-10
Random peer review programs are not optimized to discover cases with diagnostic error and thus have inherent limitations with respect to educational and quality improvement value. Nonrandom peer review offers an alternative approach in which diagnostic error cases are targeted for collection during routine clinical practice. The objective of this study was to compare error cases identified through random and nonrandom peer review approaches at an academic center. During the 1-year study period, the number of discrepancy cases and score of discrepancy were determined from each approach. The nonrandom peer review process collected 190 cases, of which 60 were scored as 2 (minor discrepancy), 94 as 3 (significant discrepancy), and 36 as 4 (major discrepancy). In the random peer review process, 1,690 cases were reviewed, of which 1,646 were scored as 1 (no discrepancy), 44 were scored as 2 (minor discrepancy), and none were scored as 3 or 4. Several teaching lessons and quality improvement measures were developed as a result of analysis of error cases collected through the nonrandom peer review process. Our experience supports the implementation of nonrandom peer review as a replacement to random peer review, with nonrandom peer review serving as a more effective method for collecting diagnostic error cases with educational and quality improvement value. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Improving waveform inversion using modified interferometric imaging condition
NASA Astrophysics Data System (ADS)
Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen
2017-12-01
Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively; even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct the wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. With the random boundary condition, results in less illuminated areas are more seriously affected by random scattering than in other areas due to the lack of coverage. In this paper, we replace direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition, a weighted average of extended imaging gathers, can be used directly in the gradient computation. In this process we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.
NASA Astrophysics Data System (ADS)
Nickelsen, Daniel
2017-07-01
The statistics of velocity increments in homogeneous and isotropic turbulence exhibit universal features in the limit of infinite Reynolds numbers. Following Kolmogorov's 1941 scaling law, many turbulence models aim to capture these universal features; some are known to have an equivalent formulation in terms of Markov processes. We derive the Markov process equivalent to the particularly successful scaling law postulated by She and Leveque. The Markov process is a jump process for velocity increments u(r) in scale r in which the jumps occur randomly but with deterministic width in u. From its master equation we establish a prescription to simulate the She-Leveque process and compare it with Kolmogorov scaling. To put the She-Leveque process into the context of other established turbulence models on the Markov level, we derive a diffusion process for u(r) using two properties of the Navier-Stokes equation. This diffusion process already includes Kolmogorov scaling, extended self-similarity and a class of random cascade models. The fluctuation theorem of this Markov process implies a 'second law' that puts a loose bound on the multipliers of the random cascade models. This bound explicitly allows for instances of inverse cascades, which are necessary to satisfy the fluctuation theorem. By adding a jump process to the diffusion process, we go beyond Kolmogorov scaling and formulate the most general scaling law for the class of Markov processes having both diffusion and jump parts. This Markov scaling law includes She-Leveque scaling and a scaling law derived by Yakhot.
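For reference, the two scaling laws being compared are commonly written as structure-function exponents (standard forms of the Kolmogorov 1941 and She-Leveque laws):

```latex
\begin{align}
  \langle |u(r)|^{n} \rangle \propto r^{\zeta_n}, \qquad
  \zeta_n^{\mathrm{K41}} = \frac{n}{3}, \qquad
  \zeta_n^{\mathrm{SL}} = \frac{n}{9}
    + 2\left[1 - \left(\frac{2}{3}\right)^{n/3}\right].
\end{align}
```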
ERIC Educational Resources Information Center
Hanisch, Charlotte; Hautmann, Christopher; Plück, Julia; Eichelberger, Ilka; Döpfner, Manfred
2014-01-01
Background: Our indicated Prevention program for preschool children with Externalizing Problem behavior (PEP) demonstrated improved parenting and child problem behavior in a randomized controlled efficacy trial and in a study with an effectiveness design. The aim of the present analysis of data from the randomized controlled trial was to identify…
Spread of information and infection on finite random networks
NASA Astrophysics Data System (ADS)
Isham, Valerie; Kaczmarska, Joanna; Nekovee, Maziar
2011-04-01
The modeling of epidemic-like processes on random networks has received considerable attention in recent years. While these processes are inherently stochastic, most previous work has been focused on deterministic models that ignore important fluctuations that may persist even in the infinite network size limit. In a previous paper, for a class of epidemic and rumor processes, we derived approximate models for the full probability distribution of the final size of the epidemic, as opposed to only mean values. In this paper we examine via direct simulations the adequacy of the approximate model to describe stochastic epidemics and rumors on several random network topologies: homogeneous networks, Erdös-Rényi (ER) random graphs, Barabasi-Albert scale-free networks, and random geometric graphs. We find that the approximate model is reasonably accurate in predicting the probability of spread. However, the position of the threshold and the conditional mean of the final size for processes near the threshold are not well described by the approximate model even in the case of homogeneous networks. We attribute this failure to the presence of other structural properties beyond degree-degree correlations, and in particular clustering, which are present in any finite network but are not incorporated in the approximate model. In order to test this “hypothesis” we perform additional simulations on a set of ER random graphs where degree-degree correlations and clustering are separately and independently introduced using recently proposed algorithms from the literature. Our results show that even strong degree-degree correlations have only weak effects on the position of the threshold and the conditional mean of the final size. On the other hand, the introduction of clustering greatly affects both the position of the threshold and the conditional mean. Similar analysis for the Barabasi-Albert scale-free network confirms the significance of clustering on the dynamics of rumor spread. For this network, though, with its highly skewed degree distribution, the addition of positive correlation had a much stronger effect on the final size distribution than was found for the simple random graph.
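As a rough illustration of the kind of stochastic spread being simulated, an SIR-style independent-cascade toy on an Erdös-Rényi graph is sketched below; this is not the authors' approximate model, and the parameters are illustrative. Repeated runs yield the final-size distribution discussed above.

```python
import random

def sir_final_size(n, p_edge, p_trans, seed=5):
    """SIR-type spread on an Erdos-Renyi graph G(n, p_edge): each newly
    infected node gets one chance to infect each neighbour with p_trans."""
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].append(j)
                nbrs[j].append(i)
    infected, ever = [0], {0}          # seed the spread at node 0
    while infected:
        nxt = []
        for u in infected:
            for v in nbrs[u]:
                if v not in ever and rng.random() < p_trans:
                    ever.add(v)
                    nxt.append(v)
        infected = nxt
    return len(ever)

# repeated runs approximate the final-size distribution
sizes = [sir_final_size(500, 8 / 500, 0.3, seed=s) for s in range(200)]
```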
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memoryless transformation of a Gaussian process whose covariance is uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and to generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
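A minimal sketch of Woodcock (delta-tracking) Monte Carlo, the transport method named in the abstract, for a purely absorbing 1D slab with a spatially varying cross section; the cross-section function here is an arbitrary illustrative stand-in, not a lognormal Karhunen-Loève realization.

```python
import numpy as np

def woodcock_transmission(sigma_fn, L, sigma_max, n_particles=10_000, seed=6):
    """Woodcock (delta-tracking) estimate of transmission through a slab
    [0, L] with varying total cross section sigma_fn(x) <= sigma_max.
    Purely absorbing medium, forward-directed particles (no scattering)."""
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(n_particles):
        x = 0.0
        while True:
            x += rng.exponential(1.0 / sigma_max)  # flight to tentative collision
            if x >= L:
                transmitted += 1
                break
            if rng.random() < sigma_fn(x) / sigma_max:
                break                              # real collision: absorbed
    return transmitted / n_particles

# one smooth illustrative realization of the cross section
sigma = lambda x: 0.5 * np.exp(0.4 * np.sin(3 * x))
print(woodcock_transmission(sigma, L=5.0, sigma_max=0.5 * np.exp(0.4)))
```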
Wright, David L; Magnuson, Curt E; Black, Charles B
2005-09-01
Individuals practiced two unique discrete sequence production tasks that differed in their relative time profile in either a blocked or random practice schedule. Each participant was subsequently administered a "precuing" protocol to examine the cost of initially compiling or modifying the plan for an upcoming movement's relative timing. The findings indicated that, in general, random practice facilitated the programming of the required movement timing, and this was accomplished while exhibiting greater accuracy in movement production. Participants exposed to random practice exhibited the greatest motor programming benefit, when a modification to an already prepared movement timing profile was required. When movement timing was only partially constructed prior to the imperative signal, the individuals who were trained in blocked and random practice formats accrued a similar cost to complete the programming process. These data provide additional support for the recent claim of Immink & Wright (2001) that at least some of the benefit from experience in a random as opposed to blocked training context can be localized to superior development and implementation of the motor programming process before executing the movement.
Random walks and diffusion on networks
NASA Astrophysics Data System (ADS)
Masuda, Naoki; Porter, Mason A.; Lambiotte, Renaud
2017-11-01
Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
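As one concrete example of random-walk-based ranking mentioned above, PageRank can be computed by power iteration on the walk's transition matrix. A minimal dense-matrix sketch, with an assumed damping factor of 0.85 and dangling nodes sent to the uniform distribution:

```python
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10):
    """PageRank via power iteration; A[i, j] = 1 for an edge i -> j."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.maximum(out, 1), 1.0 / n)  # dangling -> uniform
    r = np.full(n, 1.0 / n)
    while True:
        r_new = alpha * r @ P + (1 - alpha) / n             # teleporting walker
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A))
```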
Face perception is tuned to horizontal orientation in the N170 time window.
Jacques, Corentin; Schiltz, Christine; Goffaux, Valerie
2014-02-07
The specificity of face perception is thought to reside both in its dramatic vulnerability to picture-plane inversion and its strong reliance on horizontally oriented image content. Here we asked when in the visual processing stream face-specific perception is tuned to horizontal information. We measured the behavioral performance and scalp event-related potentials (ERP) when participants viewed upright and inverted images of faces and cars (and natural scenes) that were phase-randomized in a narrow orientation band centered either on vertical or horizontal orientation. For faces, the magnitude of the inversion effect (IE) on behavioral discrimination performance was significantly reduced for horizontally randomized compared to vertically or nonrandomized images, confirming the importance of horizontal information for the recruitment of face-specific processing. Inversion affected the processing of nonrandomized and vertically randomized faces early, in the N170 time window. In contrast, the magnitude of the N170 IE was much smaller for horizontally randomized faces. The present research indicates that the early face-specific neural representations are preferentially tuned to horizontal information and offers new perspectives for a description of the visual information feeding face-specific perception.
NASA Astrophysics Data System (ADS)
Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying
2015-04-01
In this paper we investigate, via Monte Carlo simulations and theoretical analysis, the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update. Unlike in previous exclusion process models, particles first try to hop over successive unoccupied sites with a probability q, which may represent the random access of particles. Numerical simulations yield stationary particle currents, density profiles, and phase diagrams. There are three possible stationary phases: the low density (LD) phase, the high density (HD) phase, and the maximal current (MC) phase. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with the simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
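For comparison, a plain open-boundary TASEP with random-sequential update can be simulated as below; this sketch uses nearest-neighbour hops only and does not model the paper's long-range hopping probability q. With α = β = 0.6 the system sits in the maximal-current phase, where the current is approximately 1/4.

```python
import numpy as np

def tasep_current(L=200, alpha=0.6, beta=0.6, attempts=1_000_000, seed=7):
    """Open-boundary TASEP, random-sequential update, nearest-neighbour hops.
    Returns the exit current (one unit of time = L + 1 attempted moves)."""
    rng = np.random.default_rng(seed)
    tau = np.zeros(L, dtype=np.int8)
    exits = 0
    for _ in range(attempts):
        i = int(rng.integers(-1, L))             # -1 selects the entry bond
        if i == -1:
            if tau[0] == 0 and rng.random() < alpha:
                tau[0] = 1                       # injection
        elif i == L - 1:
            if tau[i] == 1 and rng.random() < beta:
                tau[i] = 0                       # extraction
                exits += 1
        elif tau[i] == 1 and tau[i + 1] == 0:
            tau[i], tau[i + 1] = 0, 1            # bulk hop
    return exits * (L + 1) / attempts

print(tasep_current())   # approx. 1/4 in the maximal-current phase
```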
Random element method for numerical modeling of diffusional processes
NASA Technical Reports Server (NTRS)
Ghoniem, A. F.; Oppenheim, A. K.
1982-01-01
The random element method is a generalization of the random vortex method, which was developed for the numerical modeling of momentum transport processes as expressed by the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions, except for a statistical error introduced by using a finite number of elements; this error can be reduced by increasing the number of elements or by ensemble averaging over a number of solutions.
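A minimal sketch of the underlying idea, solving the 1D diffusion equation u_t = ν u_xx by propagating elements with Gaussian random steps (grid-free, as described); the element count and parameters are illustrative, and the full random element method adds machinery not shown here.

```python
import numpy as np

def diffuse_random_walk(x0, nu, t, n_steps, seed=8):
    """Random-walk solution of u_t = nu * u_xx: each element takes Gaussian
    steps of variance 2*nu*dt, i.e. Brownian motion with diffusivity nu."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    x = np.copy(x0)
    for _ in range(n_steps):
        x += rng.normal(scale=np.sqrt(2.0 * nu * dt), size=x.shape)
    return x

# 10_000 elements from a point source at the origin; a histogram of the
# result approximates the exact Gaussian solution with variance 2*nu*t
samples = diffuse_random_walk(np.zeros(10_000), nu=0.1, t=1.0, n_steps=100)
```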
Phenomenological picture of fluctuations in branching random walks
NASA Astrophysics Data System (ADS)
Mueller, A. H.; Munier, S.
2014-10-01
We propose a picture of the fluctuations in branching random walks, which leads to predictions for the distribution of a random variable that characterizes the position of the bulk of the particles. We also interpret the 1/√t correction to the average position of the rightmost particle of a branching random walk for large times t ≫ 1, computed by Ebert and Van Saarloos, as fluctuations on top of the mean-field approximation of this process with a Brunet-Derrida cutoff at the tip that simulates discreteness. Our analytical formulas successfully compare to numerical simulations of a particular model of a branching random walk.
Use of Analogies in the Study of Diffusion
ERIC Educational Resources Information Center
Letic, Milorad
2014-01-01
Emergent processes, such as diffusion, are considered more difficult to understand than direct processes. In physiology, most processes are presented as direct processes, so emergent processes, when encountered, are even more difficult to understand. It has been suggested that, when studying diffusion, misconceptions about random processes are the…
Single-random-phase holographic encryption of images
NASA Astrophysics Data System (ADS)
Tsang, P. W. M.
2017-02-01
In this paper, a method is proposed for encrypting an optical image onto a phase-only hologram, utilizing a single random phase mask as the private encryption key. The encryption process can be divided into three stages. First, the source image to be encrypted is scaled in size and pasted onto an arbitrary position in a larger global image; the remaining areas of the global image that are not occupied by the source image can be filled with randomly generated content. As such, the global image as a whole is very different from the source image, while the visual quality of the source image is preserved. Second, a digital Fresnel hologram is generated from the new image and converted into a phase-only hologram based on bidirectional error diffusion. In the final stage, a fixed random phase mask is added to the phase-only hologram as the private encryption key. In the decryption process, the global image, together with the source image it contains, can be reconstructed from the phase-only hologram if it is overlaid with the correct decryption key. The proposed method is highly resistant to different forms of plaintext attacks, which are commonly used to deduce the encryption key in existing holographic encryption processes. In addition, both the encryption and the decryption processes are simple and easy to implement.
Random-order fractional bistable system and its stochastic resonance
NASA Astrophysics Data System (ADS)
Gao, Shilong; Zhang, Li; Liu, Hui; Kan, Bixia
2017-01-01
In this paper, the diffusion of Brownian particles in a viscous liquid subject to stochastic fluctuations of the external environment is modeled by a random-order fractional bistable equation, and the stochastic resonance phenomena in this system, a typical nonlinear dynamic behavior, are investigated. First, the derivation of the random-order fractional bistable system is given; in particular, the random power-law memory is discussed in depth to obtain a physical interpretation of the random-order fractional derivative. Second, the stochastic resonance evoked by the random order and an external periodic force is studied by numerical simulation; notably, frequency-shifting phenomena of the periodic output are observed in the stochastic resonance induced by the excitation of the random order. Finally, the stochastic resonance of the system under the double stochastic excitations of the random order and internal color noise is also investigated.
Generating random numbers by means of nonlinear dynamic systems
NASA Astrophysics Data System (ADS)
Zang, Jiaqi; Hu, Haojie; Zhong, Juhua; Luo, Duanbin; Fang, Yi
2018-07-01
To introduce the randomness of a physical process to students, a chaotic pendulum experiment was opened at East China University of Science and Technology (ECUST) at the undergraduate level in the physics department. It was shown that chaotic motion could be initiated by adjusting the operation of the chaotic pendulum. Using the angular displacement data of the chaotic motion, random binary numerical arrays can be generated. To check the randomness of the generated numerical arrays, the NIST Special Publication 800-20 method was adopted. It was found that all the random arrays generated by the chaotic motion passed the validity criteria, and some were even better in quality than pseudo-random numbers generated by a computer. The experiments demonstrate that a chaotic pendulum can be used as an efficient mechanical facility for generating random numbers and can be applied in teaching random motion to students.
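The simplest test in the NIST statistical suites, the frequency (monobit) test, can be sketched as follows. Thresholding the pendulum's angular displacements about their median to produce bits is an illustrative assumption, not necessarily the authors' exact procedure.

```python
import math

def monobit_p_value(bits):
    """NIST-style frequency (monobit) test: p-value for the hypothesis that
    the bit sequence is unbiased i.i.d., via erfc of the normalized excess."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

# e.g. threshold angular displacements about their median to obtain bits,
# then require p >= 0.01 to accept the sequence as random
print(monobit_p_value([1, 0, 1, 1, 0, 0, 1, 0] * 125))
```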
Open quantum random walks: Bistability on pure states and ballistically induced diffusion
NASA Astrophysics Data System (ADS)
Bauer, Michel; Bernard, Denis; Tilloy, Antoine
2013-12-01
Open quantum random walks (OQRWs) deal with quantum random motions on a line for systems with internal and orbital degrees of freedom. The internal system behaves as a quantum random gyroscope coding for the direction of the orbital moves. We reveal the existence of a transition, depending on OQRW moduli, in the internal system behaviors from simple oscillations to random flips between two unstable pure states. This induces a transition in the orbital motions from the usual diffusion to ballistically induced diffusion with a large mean free path and large effective diffusion constant at large times. We also show that mixed states of the internal system are converted into random pure states during the process. We touch upon possible experimental realizations.
1983-03-01
[Naval Postgraduate School thesis; only fragments survive extraction, and the displayed equation is unrecoverable.] As advisor, Prof. Gaver suggested and derived the Brownian bridge. The random tour process is extended by deriving the mean square radial distance for a random tour with an arbitrary course change distribution; for the original random tour model, equation (3) results as expected. The notion of an arbitrary course change distribution is important because the…
RANDOM EVOLUTIONS, MARKOV CHAINS, AND SYSTEMS OF PARTIAL DIFFERENTIAL EQUATIONS
Griego, R. J.; Hersh, R.
1969-01-01
Several authors have considered Markov processes defined by the motion of a particle on a fixed line with a random velocity [1, 6, 8, 10] or a random diffusivity [5, 12]. A "random evolution" is a natural but apparently new generalization of this notion. In this note we hope to show that this concept leads to simple and powerful applications of probabilistic tools to initial-value problems of both parabolic and hyperbolic type. We obtain existence theorems, representation theorems, and asymptotic formulas, both old and new. PMID:16578690
Quantum random walks on congested lattices and the effect of dephasing.
Motes, Keith R; Gilchrist, Alexei; Rohde, Peter P
2016-01-27
We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker's direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices.
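For background, a discrete-time coined (Hadamard) quantum walk on a line, without the congestion or dephasing studied in the paper, can be simulated directly; its ballistic spreading is what congestion and dephasing degrade. A minimal sketch:

```python
import numpy as np

def hadamard_walk(n_steps):
    """Coined quantum walk on the line (no defects, no dephasing): returns
    the position distribution after n_steps, which spreads ballistically."""
    size = 2 * n_steps + 1
    psi = np.zeros((size, 2), dtype=complex)
    psi[n_steps, 0] = 1.0                      # walker at origin, coin |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(n_steps):
        psi = psi @ H.T                        # coin toss at every site
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]               # coin 0 shifts right
        new[:-1, 1] = psi[1:, 1]               # coin 1 shifts left
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)

p = hadamard_walk(100)                         # std dev ~ n/sqrt(2), not sqrt(n)
```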
Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.
Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M
2014-02-10
Schemes for X-ray imaging single protein molecules using new x-ray sources, like x-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to process the data is to use statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009), which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^-2 photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single molecule reconstruction problem.
Poly-Gaussian model of randomly rough surface in rarefied gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksenova, Olga A.; Khalidov, Iskander A.
2014-12-09
Surface roughness is simulated by a non-Gaussian random process model. Our results for the scattering of rarefied gas atoms from a rough surface, using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.
Time distributions of solar energetic particle events: Are SEPEs really random?
NASA Astrophysics Data System (ADS)
Jiggens, P. T. A.; Gabriel, S. B.
2009-10-01
Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes and compared these with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. Inherent Poisson assumptions of stationarity and event independence are investigated, and it appears that they do not hold and can be dependent upon the event definition used. SEPEs appear to have some memory indicating that events are not completely random with activity levels varying even during solar active periods and are characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment should be modified to incorporate long-term event dependency and short-term system memory.
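A crude diagnostic for the departure from Poisson statistics discussed here is the coefficient of variation of the waiting times, which is 1 for a homogeneous Poisson process and larger when events cluster. A hedged sketch on synthetic data (not the SEPE catalogue itself):

```python
import numpy as np

def waiting_time_cv(event_times):
    """Coefficient of variation of inter-event waiting times; ~1 for a
    homogeneous Poisson process, >1 suggests clustering or memory."""
    w = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return w.std() / w.mean()

rng = np.random.default_rng(9)
poisson_like = np.cumsum(rng.exponential(1.0, size=5_000))
clustered = np.cumsum(rng.pareto(1.5, size=5_000))   # heavy-tailed waits
print(waiting_time_cv(poisson_like), waiting_time_cv(clustered))
```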
Applying a weighted random forests method to extract karst sinkholes from LiDAR data
NASA Astrophysics Data System (ADS)
Zhu, Junfeng; Pierskalla, William P.
2016-02-01
Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% on the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures such as visual inspection and field verification.
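The weighting idea can be illustrated on synthetic data: depressions are described by predictors and sinkholes are the rare class. The 11 real predictors (geometry, natural and human factors) are replaced here by random features, and `class_weight="balanced"` is one common way to weight a random forest, not necessarily the exact scheme the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 11))                       # stand-in predictor matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2, size=n) > 2.5).astype(int)
print("sinkhole fraction:", y.mean())              # imbalanced target

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                            random_state=0).fit(Xtr, ytr)
print("balanced accuracy:", balanced_accuracy_score(yte, rf.predict(Xte)))
```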
Some practical problems in implementing randomization.
Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet
2010-06-01
While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
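Several of these recommendations, notably reviewing the schedule before the trial, avoiding on-demand random number generators, and keeping the allocation reproducible and verifiable, amount to generating the whole schedule up front from a fixed seed. A minimal sketch, with illustrative block size and arm labels:

```python
import random

def permuted_block_schedule(n_subjects, block_size=4, arms=("A", "B"),
                            seed=20100601):
    """Pre-generated permuted-block schedule from a fixed, auditable seed."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)          # fixed seed -> reproducible schedule
    schedule = []
    while len(schedule) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)             # balance within every block
        schedule.extend(block)
    return schedule[:n_subjects]

schedule = permuted_block_schedule(24)
print(schedule)
# Because the seed is fixed, any participant's allocation can later be
# regenerated and checked against the stored schedule.
```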
Common Randomness Principles of Secrecy
ERIC Educational Resources Information Center
Tyagi, Himanshu
2013-01-01
This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.
2012-05-01
In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.
Effect of Store and Forward Teledermatology on Quality of Life
Whited, John D.; Warshaw, Erin M.; Edison, Karen E.; Kapur, Kush; Thottapurathu, Lizy; Raju, Srihari; Cook, Bethany; Engasser, Holly; Pullen, Samantha; Parks, Patricia; Sindowski, Tom; Motyka, Danuta; Brown, Rodney; Moritz, Thomas E.; Datta, Santanu K.; Chren, Mary-Margaret; Marty, Lucinda; Reda, Domenic J.
2013-01-01
Importance: Although research on quality of life and dermatologic conditions is well represented in the literature, information on teledermatology's effect on quality of life is virtually absent. Objective: To determine the effect of store and forward teledermatology on quality of life. Design: Two-site, parallel-group, superiority randomized controlled trial. Setting: Dermatology clinics and affiliated sites of primary care at 2 US Department of Veterans Affairs medical facilities. Participants: Patients being referred to a dermatology clinic were randomly assigned, stratified by site, to teledermatology or the conventional consultation process. Among the 392 patients who met the inclusion criteria and were randomized, 326 completed the allocated intervention and were included in the analysis. Interventions: Store and forward teledermatology (digital images and a standardized history) or conventional text-based consultation processes were used to manage the dermatology consultations. Patients were followed up for 9 months. Main Outcome Measures: The primary end point was change in Skindex-16 scores, a skin-specific quality-of-life instrument, between baseline and 9 months. A secondary end point was change in Skindex-16 scores between baseline and 3 months. Results: Patients in both randomization groups demonstrated a clinically significant improvement in Skindex-16 scores between baseline and 9 months, with no significant difference by randomization group (P=.66, composite score). No significant difference in Skindex-16 scores by randomization group between baseline and 3 months was found (P=.39, composite score). Conclusions: Compared with the conventional consultation process, store and forward teledermatology did not result in a statistically significant difference in skin-related quality of life at 3 or 9 months after referral. Trial Registration: clinicaltrials.gov Identifier: NCT00488293 PMID:23426111
A Hybrid Process Fidelity Assessment in a Home-based Randomized Clinical Trial
WILDE, MARY H.; LIEBEL, DIANNE; FAIRBANKS, EILEEN; WILSON, PAULA; LASH, MARGARET; SHAH, SHIVANI; McDONALD, MARGARET V.; BRASCH, JUDITH; ZHANG, FENG; SCHEID, EILEEN; McMAHON, JAMES M.
2016-01-01
A process fidelity assessment was conducted as a nested study within a home-based randomized clinical trial teaching self-management to 101 long-term indwelling urinary catheter users in the treatment group. Our hybrid model combined external assessments (outside observations and tape recordings) with internal evaluation methods (study nurse forms and notes) for a comprehensive process fidelity assessment. Barriers, patient-related issues, and nurse perspectives were identified, demonstrating the complexity of home care intervention research. The complementary and synergistic approaches provided in-depth information about the context of the delivery and the impact of the intervention on study outcomes. PMID:25894688
Three-level sampler having automated thresholds
NASA Technical Reports Server (NTRS)
Jurgens, R. F.
1976-01-01
A three-level sampler is described that has its thresholds controlled automatically so as to track changes in the statistics of the random process being sampled. In particular, the mean value is removed and the ratio of the standard deviation of the random process to the threshold is maintained constant. The system is configured in such a manner that slow drifts in the level comparators and digital-to-analog converters are also removed. The ratio of the standard deviation to threshold level may be chosen within the constraints of the ratios of two integers N and M. These may be chosen to minimize the quantizing noise of the sampled process.
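A hedged sketch of the automatic-threshold idea: quantize to three levels, track the input mean, and servo the threshold T so that T/σ stays at a target ratio. For Gaussian input the outer levels fire with probability erfc(r/√2), so the loop nudges T until the observed firing rate matches that value. The gains and the ratio r are illustrative; the paper's integer-ratio (N, M) constraint and comparator hardware are not modelled.

```python
import numpy as np
from math import erfc, sqrt

r = 1.0                                  # desired threshold-to-sigma ratio
target = erfc(r / sqrt(2))               # expected fraction with |x - m| > T
mean_est, T = 0.0, 1.0
rng = np.random.default_rng(4)

for _ in range(200_000):
    x = 2.0 + 1.5 * rng.standard_normal()        # source: mean 2, sigma 1.5
    mean_est += 1e-3 * (x - mean_est)            # mean removal tracks drift
    fired = 1.0 if abs(x - mean_est) > T else 0.0
    T += 1e-3 * T * (fired - target)             # multiplicative servo on T

print(f"mean_est={mean_est:.2f}, T={T:.2f}, T/sigma={T / 1.5:.2f} (target {r})")
```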
NASA Technical Reports Server (NTRS)
Frehlich, Rod
1993-01-01
Calculations of the exact Cramer-Rao Bound (CRB) for unbiased estimates of the mean frequency, signal power, and spectral width of Doppler radar/lidar signals (a Gaussian random process) are presented. Approximate CRBs are derived using the Discrete Fourier Transform (DFT). These approximate results are equal to the exact CRB when the DFT coefficients are mutually uncorrelated. Previous high-SNR limits for CRBs are shown to be inaccurate because the discrete summations cannot be approximated with integration. The performance of an approximate maximum likelihood estimator for mean frequency approaches the exact CRB for moderate signal-to-noise ratio and moderate spectral width.
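For a zero-mean real Gaussian process with covariance C(θ), the exact Fisher information has the Slepian-Bangs form F_ij = ½ tr(C⁻¹ ∂C/∂θ_i C⁻¹ ∂C/∂θ_j), and the CRB is F⁻¹. The sketch below evaluates this numerically for a Gaussian-shaped correlation model with mean frequency, power, width, and noise level; the model and parameter values are common lidar stand-ins, not necessarily the paper's exact signal model.

```python
import numpy as np

N, dt = 64, 1.0
t = np.arange(N) * dt
P, f, w, noise = 1.0, 0.1, 0.02, 0.5   # power, mean freq, width, noise level

def cov(P, f, w, noise):
    # Gaussian spectrum centred at +/- f gives this covariance in the lag tau.
    tau = t[:, None] - t[None, :]
    r = P * np.exp(-2 * (np.pi * w * tau) ** 2) * np.cos(2 * np.pi * f * tau)
    return r + noise * np.eye(N)

def fisher(params, eps=1e-6):
    C = cov(*params)
    Ci = np.linalg.inv(C)
    D = []
    for i in range(len(params)):
        p = list(params); p[i] += eps
        D.append((cov(*p) - C) / eps)  # numerical dC/dtheta_i
    return np.array([[0.5 * np.trace(Ci @ Di @ Ci @ Dj) for Dj in D]
                     for Di in D])

crb = np.linalg.inv(fisher([P, f, w, noise]))
print("CRB standard deviations (P, f, w, noise):", np.sqrt(np.diag(crb)))
```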
2013-01-01
Background Chronic fatigue syndrome (CFS) or myalgic encephalomyelitis (ME) is relatively common in children with limited evidence for treatment. The Phil Parker Lightning Process (LP) is a trademarked intervention, which >250 children use annually. There are no reported studies investigating the effectiveness or possible side effects of LP. Methods The trial population was drawn from the Bath and Bristol NHS specialist paediatric CFS or ME service. The study was designed as a pilot randomized trial with children (aged 12 to 18 years) comparing specialist medical care with specialist medical care plus the Lightning Process. Integrated qualitative methodology was used to explore the feasibility and acceptability of the recruitment, randomization and interventions. Results A total of 56 children were recruited from 156 eligible children (1 October 2010 to 16 June 2012). Recruitment, randomization and both interventions were feasible and acceptable. Participants suggested changes to improve feasibility and acceptability and we incorporated the following in the trial protocol: stopped collecting 6-week outcomes; introduced a second reminder letter; used phone calls to collect primary outcomes from nonresponders; informed participants about different approaches of each intervention and changed our recommendation for the primary outcome for the full study from school attendance to disability (SF-36 physical function subscale) and fatigue (Chalder Fatigue Scale). Conclusions Conducting randomized controlled trials (RCTs) to investigate an alternative treatment such as LP is feasible and acceptable for children with CFS or ME. Feasibility studies that incorporate qualitative methodology enable changes to be made to trial protocols to improve acceptability to participants. This is likely to improve recruitment rate and trial retention. Trial registration Feasibility study first randomization: 29 September 2010. Trial registration: Current Controlled Trials ISRCTN81456207 (31 July 2012). Full trial first randomization: 19 September 2012. PMID:24304689
Cappell, M S; Spray, D C; Bennett, M V
1988-06-28
Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.
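The non-stationary Poisson model described above, a Poisson process whose mean frequency changes with time, is straightforward to simulate by Lewis-Shedler thinning. The rate function below is an arbitrary illustration; the stationary case is recovered when the rate is constant.

```python
import numpy as np

rng = np.random.default_rng(5)

def rate(t):                       # events per second, example modulation
    return 2.0 + 1.5 * np.sin(2 * np.pi * t / 60.0)

lam_max, T = 3.5, 300.0            # upper bound on rate, total duration (s)
t, events = 0.0, []
while True:
    t += rng.exponential(1.0 / lam_max)     # candidate from homogeneous process
    if t > T:
        break
    if rng.random() < rate(t) / lam_max:    # accept with prob lambda(t)/lam_max
        events.append(t)

print(len(events), "event times generated")
```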
Subjective randomness as statistical inference.
Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B
2018-06-01
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
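A minimal sketch of the inference view: score a binary sequence by log P(seq | random) − log P(seq | regular), where the random account assigns 2⁻ⁿ to every sequence. The "regular" account here is our own illustrative choice, a repetition-biased Markov model, far simpler than the restricted computing-machine models the paper develops.

```python
import numpy as np

def randomness_score(seq, p_repeat=0.8):
    """Log evidence ratio for 'random' vs a repetition-biased 'regular' model."""
    seq = np.asarray(seq)
    log_random = -len(seq) * np.log(2)
    # First symbol uniform; later symbols repeat the previous one w.p. p_repeat.
    repeats = seq[1:] == seq[:-1]
    log_regular = (-np.log(2)
                   + repeats.sum() * np.log(p_repeat)
                   + (~repeats).sum() * np.log(1 - p_repeat))
    return log_random - log_regular   # strongly negative => looks non-random

print(randomness_score([1, 1, 1, 1, 1, 1, 1, 1]))   # eight heads: negative
print(randomness_score([1, 0, 0, 1, 0, 1, 1, 0]))   # irregular: positive
```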
Srivastava, Ayush; Srivastava, Anurag; Pandey, Ravindra M
2017-10-01
Randomized controlled trials have become the most respected scientific tool for measuring the effectiveness of a medical therapy. The design, conduct and analysis of randomized controlled trials were developed by Sir Ronald A. Fisher, a mathematician in Great Britain. Fisher propounded that the process of randomization would equally distribute all the known and even unknown covariates among the two or more comparison groups, so that any difference observed could be ascribed to treatment effect. Today, we observe that in many situations this prediction of Fisher does not stand true; hence, adaptive randomization schedules have been designed to adjust for major imbalance in important covariates. The present essay unravels some weaknesses inherent in the Fisherian concept of the randomized controlled trial.
ERIC Educational Resources Information Center
Lavenda, Bernard H.
1985-01-01
Explains the phenomenon of Brownian motion, which serves as a mathematical model for random processes. Topics addressed include kinetic theory, Einstein's theory, particle displacement, and others. Points out that observations of the random course of a particle suspended in fluid led to the first accurate measurement of atomic mass. (DH)
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
NASA Astrophysics Data System (ADS)
Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, is proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the least AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.
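A small sketch of how AWT and ATT can be compared between first-come-first-served and a non-preemptive randomized selection policy. All jobs are assumed ready at time zero and the burst times are illustrative; this is not the PicOS scheduler itself.

```python
import random

def schedule_metrics(order, burst):
    """AWT and ATT for a non-preemptive run in the given job order."""
    t, wait, turn = 0, [], []
    for j in order:
        wait.append(t)
        t += burst[j]
        turn.append(t)
    return sum(wait) / len(wait), sum(turn) / len(turn)

burst = [8, 2, 5, 1, 9, 3]
jobs = list(range(len(burst)))
awt_fcfs, att_fcfs = schedule_metrics(jobs, burst)

rng = random.Random(0)
trials = []
for _ in range(10_000):                 # average behaviour of random picks
    order = jobs[:]
    rng.shuffle(order)                  # randomized selection policy
    trials.append(schedule_metrics(order, burst))
awt_rand = sum(a for a, _ in trials) / len(trials)
att_rand = sum(t for _, t in trials) / len(trials)

print(f"FCFS   AWT={awt_fcfs:.2f} ATT={att_fcfs:.2f}")
print(f"Random AWT={awt_rand:.2f} ATT={att_rand:.2f}")
```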
Probing the stochastic, motor-driven properties of the cytoplasm using force spectrum microscopy
Guo, Ming; Ehrlicher, Allen J.; Jensen, Mikkel H.; Renz, Malte; Moore, Jeffrey R.; Goldman, Robert D.; Lippincott-Schwartz, Jennifer; Mackintosh, Frederick C.; Weitz, David A.
2014-01-01
Molecular motors in cells typically produce highly directed motion; however, the aggregate, incoherent effect of all active processes also creates randomly fluctuating forces, which drive diffusive-like, non-thermal motion. Here we introduce force spectrum microscopy (FSM) to directly quantify random forces within the cytoplasm of cells and thereby probe stochastic motor activity. This technique combines measurements of the random motion of probe particles with independent micromechanical measurements of the cytoplasm to quantify the spectrum of force fluctuations. Using FSM, we show that force fluctuations substantially enhance intracellular movement of small and large components. The fluctuations are three times larger in malignant cells than in their benign counterparts. We further demonstrate that vimentin acts globally to anchor organelles against randomly fluctuating forces in the cytoplasm, with no effect on their magnitude. Thus, FSM has broad applications for understanding the cytoplasm and its intracellular processes in relation to cell physiology in healthy and diseased states. PMID:25126787
Transfer and alignment of random single-walled carbon nanotube films by contact printing.
Liu, Huaping; Takagi, Daisuke; Chiashi, Shohei; Homma, Yoshikazu
2010-02-23
We present a simple method to transfer large-area random single-walled carbon nanotube (SWCNT) films grown on SiO2 substrates onto another surface through a simple contact printing process. The transferred random SWCNT films can be assembled into highly ordered, dense regular arrays with high uniformity and reproducibility by sliding the growth substrate during the transfer process. The position of the transferred SWCNT film can be controlled by predefined patterns on the receiver substrates. The process is compatible with a variety of substrates, and even metal meshes for transmission electron microscopy (TEM) can be used as receiver substrates. Thus, suspended web-like SWCNT networks and aligned SWCNT arrays can be formed over the grids of TEM meshes, so that the structures of the transferred SWCNTs can be directly observed by TEM. This simple technique can be used to controllably transfer SWCNTs for property studies, for the fabrication of devices, or even as support films for TEM meshes.
NASA Astrophysics Data System (ADS)
Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang
2016-04-01
This paper proposes the generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow the non-homogeneous continuous time Markov process which is a continuous time and discrete state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop the random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
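The classical age-replacement trade-off the policies build on admits a compact numerical sketch: replace preventively at age T (cost cp) or on failure (cost cf > cp), with mean cost rate c(T) = [cp·R(T) + cf·F(T)] / ∫₀ᵀ R(t) dt. Weibull lifetimes and the cost values are illustrative; the paper optimises the multi-state, imperfect-maintenance analogue.

```python
import numpy as np

k, lam = 2.5, 100.0            # Weibull shape (aging: k > 1) and scale
cp, cf = 1.0, 10.0             # preventive and failure replacement costs

def cost_rate(T):
    t = np.linspace(0, T, 2001)
    R = np.exp(-(t / lam) ** k)          # survival function
    expected_cycle = np.trapz(R, t)      # E[min(X, T)] = integral of R on [0, T]
    return (cp * R[-1] + cf * (1 - R[-1])) / expected_cycle

Ts = np.linspace(10, 300, 291)
rates = [cost_rate(T) for T in Ts]
best = Ts[int(np.argmin(rates))]
print(f"optimum preventive replacement age ~ {best:.0f}"
      f" (cost rate {min(rates):.4f})")
```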
Karupaiah, Tilakavati; Sundram, Kalyana
2007-01-01
Most studies on lipid lowering diets have focused on the total content of saturated, polyunsaturated and monounsaturated fatty acids. However, the distribution of these fatty acids on the triacylglycerol (TAG) molecule and the molecular TAG species generated by this stereospecificity are characteristic for various native dietary TAGs. Fat randomization or interesterification is a process involving the positional redistribution of fatty acids, which leads to the generation of new TAG molecular species. A comparison between native and randomized TAGs is the subject of this review with regards to the role of stereospecificity of fatty acids in metabolic processing and effects on fasting lipids and postprandial lipemia. The positioning of unsaturated versus saturated fatty acids in the sn-2 position of TAGs indicate differences in early metabolic processing and postprandial clearance, which may explain modulatory effects on atherogenecity and thrombogenecity. Both human and animal studies are discussed with implications for human health. PMID:17625019
Large deviations and mixing for dissipative PDEs with unbounded random kicks
NASA Astrophysics Data System (ADS)
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2004-04-01
Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits the singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of theory of the random multiplicative process a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.
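A minimal simulation sketch of an ACD(1) process: durations x_i = ψ_i ε_i with conditional mean ψ_i = ω + α x_{i−1} and i.i.d. positive unit-mean innovations (exponential here). The parameter values are illustrative; for exponential innovations a Kesten-type argument suggests the tail exponent μ solves α^μ Γ(μ+1) = 1, consistent with the parameter-dependent exponent derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
omega, alpha = 0.1, 0.9
n = 1_000_000

x = np.empty(n)
prev = omega
for i in range(n):
    psi = omega + alpha * prev            # conditional mean duration
    prev = x[i] = psi * rng.exponential() # multiplicative innovation

# Crude tail check: the survival function is roughly linear on log-log
# axes, i.e. P(X > x) ~ x**(-mu) for large x.
xs = np.sort(x)[-20_000:]
surv = np.arange(len(xs), 0, -1) / n
slope = np.polyfit(np.log(xs), np.log(surv), 1)[0]
print(f"estimated tail exponent mu ~ {-slope:.2f}")
```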
Implementation of a quantum random number generator based on the optimal clustering of photocounts
NASA Astrophysics Data System (ADS)
Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.
2017-10-01
To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable process of measurement of the system from which the initial random sequence is generated. This ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented using the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events, together with a method of extracting a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach a yield rate of 64 Mbit/s for the certifiably random output sequence.
Open quantum random walk in terms of quantum Bernoulli noise
NASA Astrophysics Data System (ADS)
Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling
2018-03-01
In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.
Quantum random walks on congested lattices and the effect of dephasing
Motes, Keith R.; Gilchrist, Alexei; Rohde, Peter P.
2016-01-01
We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker’s direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices. PMID:26812924
NASA Astrophysics Data System (ADS)
Ram, Jasa; Ghosal, Partha
2015-08-01
Randomly distributed nanotubes, nanorods and nanoplates of Bi0.5Sb1.5Te3 and Bi2Te2.7Se0.3 ternary compounds have been synthesized via a high-yield solvo-thermal process. Prior to solvo-thermal heating at 230 °C for crystallization, we ensured molecular legation in a room-temperature reaction by complete reduction of the precursor materials dissolved in ethylene glycol, confirmed by replicating Raman spectra of the amorphous and crystalline materials. These nanomaterials have also been characterized using XRD, FE-SEM, EDS and TEM. A possible formation mechanism is also discussed. This single process will enable the development of thermoelectric modules, and the random distribution of diverse morphologies will be beneficial in retaining nanoscale crystallite sizes.
Order out of Randomness: Self-Organization Processes in Astrophysics
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Scholkmann, Felix; Béthune, William; Schmutz, Werner; Abramenko, Valentina; Cheung, Mark C. M.; Müller, Daniel; Benz, Arnold; Chernov, Guennadi; Kritsuk, Alexei G.; Scargle, Jeffrey D.; Melatos, Andrew; Wagoner, Robert V.; Trimble, Virginia; Green, William H.
2018-03-01
Self-organization is a property of dissipative nonlinear processes that are governed by a global driving force and a local positive feedback mechanism, which creates regular geometric and/or temporal patterns and decreases the entropy locally, in contrast to random processes. Here we investigate for the first time a comprehensive set of 17 self-organization processes that operate in planetary physics, solar physics, stellar physics, galactic physics, and cosmology. Self-organizing systems create spontaneous "order out of randomness" during the evolution from an initially disordered system to an ordered quasi-stationary system, mostly by quasi-periodic limit-cycle dynamics, but also by harmonic (mechanical or gyromagnetic) resonances. The global driving force can be due to gravity, electromagnetic forces, mechanical forces (e.g., rotation or differential rotation), thermal pressure, or acceleration of nonthermal particles, while the positive feedback mechanism is often an instability, such as the magneto-rotational (Balbus-Hawley) instability, the convective (Rayleigh-Bénard) instability, turbulence, vortex attraction, magnetic reconnection, plasma condensation, or a loss-cone instability. Physical models of astrophysical self-organization processes require hydrodynamic, magneto-hydrodynamic (MHD), plasma, or N-body simulations. Analytical formulations of self-organizing systems generally involve coupled differential equations with limit-cycle solutions of the Lotka-Volterra or Hopf-bifurcation type.
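The simplest mathematical form of "order out of randomness" via a limit cycle is the Hopf normal form, one of the mechanisms named above: for μ > 0 any noisy initial state is attracted onto a regular oscillation of radius √μ. The parameter values below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, omega = 0.5, 2.0   # bifurcation parameter and angular frequency

def hopf(t, z):
    x, y = z
    r2 = x * x + y * y
    return [mu * x - omega * y - r2 * x,
            omega * x + mu * y - r2 * y]

rng = np.random.default_rng(8)
z0 = rng.normal(scale=2.0, size=2)          # disordered initial condition
sol = solve_ivp(hopf, (0, 50), z0, dense_output=True, rtol=1e-8)

r_end = np.hypot(*sol.sol(50.0))
print(f"final radius {r_end:.3f}, limit-cycle radius {np.sqrt(mu):.3f}")
```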
Poisson process stimulation of an excitable membrane cable model.
Goldfinger, M D
1986-01-01
The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded in the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady-state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
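The paper's central observation, that refractoriness turns a Poisson input into a random but non-Poisson output, can be sketched without the Hodgkin-Huxley cable by passing a Poisson train through an absolute dead time. The rate and refractory period are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
rate, t_refr, T = 200.0, 0.004, 1000.0     # Hz, seconds, seconds

arrivals = np.cumsum(rng.exponential(1.0 / rate, size=int(2 * rate * T)))
arrivals = arrivals[arrivals < T]

out, last = [], -np.inf
for t in arrivals:                         # drop inputs inside the dead time
    if t - last >= t_refr:
        out.append(t)
        last = t

isi = np.diff(out)
cv = isi.std() / isi.mean()                # exactly 1 for a Poisson process
print(f"output ISI coefficient of variation: {cv:.3f} (< 1: non-Poisson)")
```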
Population density equations for stochastic processes with memory kernels
NASA Astrophysics Data System (ADS)
Lai, Yi Ming; de Kamps, Marc
2017-06-01
We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism with a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately to both excitatory and inhibitory input under the assumption that all inputs are generated by one renewal process.
NASA Astrophysics Data System (ADS)
Peltier, Abigail; Sapkota, Gopal; Potter, Matthew; Busse, Lynda E.; Frantz, Jesse A.; Shaw, L. Brandon; Sanghera, Jasbinder S.; Aggarwal, Ishwar D.; Poutous, Menelaos K.
2017-02-01
Random anti-reflecting subwavelength surface structures (rARSS) have been shown to suppress Fresnel reflection and scatter from optical surfaces. The structures effectively function as a gradient refractive index at the substrate boundary, and the spectral transmission properties of the boundary have been shown to depend on the structures' statistical properties (diameter, height, and density). We fabricated rARSS on fused silica substrates using gold masking. A thin layer of gold was deposited on the surface of the substrate and then subjected to a rapid thermal annealing (RTA) process at various temperatures. This RTA process resulted in the formation of gold "islands" on the surface of the substrate, which then acted as a mask while the substrate was dry etched in a reactive ion etching (RIE) process. The plasma etch yielded a fused silica surface covered with randomly arranged "rods" that act as the anti-reflective layer. We present data relating the physical characteristics of the gold "island" statistical populations and the resulting rARSS "rod" populations, as well as optical scattering losses and spectral transmission properties of the final surfaces. We focus on comparing results between samples processed at different RTA temperatures, as well as samples fabricated without undergoing RTA, to relate fabrication process statistics to transmission enhancement values.
Differential Susceptibility to Prevention: GABAergic, Dopaminergic, and Multilocus Effects
ERIC Educational Resources Information Center
Brody, Gene H.; Chen, Yi-fu; Beach, Steven R. H.
2013-01-01
Background: Randomized prevention trials provide a unique opportunity to test hypotheses about the interaction of genetic predispositions with contextual processes to create variations in phenotypes over time. Methods: Using two longitudinal, randomized prevention trials, molecular genetic and alcohol use outcome data were gathered from more than…
Application of stochastic processes in random growth and evolutionary dynamics
NASA Astrophysics Data System (ADS)
Oikonomou, Panagiotis
We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.
Hoberman, Alejandro; Shaikh, Nader; Bhatnagar, Sonika; Haralam, Mary Ann; Kearney, Diana H; Colborn, D Kathleen; Kienholz, Michelle L; Wang, Li; Bunker, Clareann H; Keren, Ron; Carpenter, Myra A; Greenfield, Saul P; Pohl, Hans G; Mathews, Ranjiv; Moxey-Mims, Marva; Chesney, Russell W
2013-06-01
A child's health, positive perceptions of the research team and consent process, and altruistic motives play significant roles in the decision-making process for parents who consent for their child to enroll in clinical research. This study identified that nonconsenting parents were better educated, had private insurance, showed lower levels of altruism, and had less understanding of the study design. Objective: To determine the factors associated with parental consent for their child's participation in a randomized, placebo-controlled trial. Design: Cross-sectional survey conducted from July 2008 to May 2011; the survey was an ancillary study to the Randomized Intervention for Children with VesicoUreteral Reflux Study. Setting: Seven children's hospitals participating in a randomized trial evaluating management of children with vesicoureteral reflux. Participants: Parents asked to provide consent for their child's participation in the randomized trial were invited to complete an anonymous online survey about factors influencing their decision. A total of 120 of the 271 (44%) invited completed the survey: 58 of 125 (46%) who had provided consent and 62 of 144 (43%) who had declined. Main Outcomes and Measures: A 60-question survey examining child, parent, and study characteristics; parental perception of the study; understanding of the design; external influences; and decision-making process. Results: Having graduated from college and having private health insurance were associated with a lower likelihood of providing consent. Parents who perceived the trial as having a low degree of risk, resulting in greater benefit to their child and other children, causing little interference with standard care, or exhibiting potential for enhanced care, or who perceived the researcher as professional, were significantly more likely to consent to participate. Higher levels of understanding of the randomization process, blinding, and the right to withdraw were significantly positively associated with consent to participate. Conclusions and Relevance: Parents who declined consent had a relatively higher socioeconomic status, had more anxiety about their decision, and found it harder to make their decision compared with consenting parents, who had higher levels of trust and altruism, perceived the potential for enhanced care, reflected better understanding of randomization, and exhibited low decisional uncertainty. Consideration of the factors included in the conceptual model should enhance the quality of the informed consent process and improve participation in pediatric clinical trials.
Electronic Noise and Fluctuations in Solids
NASA Astrophysics Data System (ADS)
Kogan, Sh.
2008-07-01
Preface; Part I. Introduction. Some Basic Concepts of the Theory of Random Processes: 1. Probability density functions. Moments. Stationary processes; 2. Correlation function; 3. Spectral density of noise; 4. Ergodicity and nonergodicity of random processes; 5. Random pulses and shot noise; 6. Markov processes. General theory; 7. Discrete Markov processes. Random telegraph noise; 8. Quasicontinuous (Diffusion-like) Markov processes; 9. Brownian motion; 10. Langevin approach to the kinetics of fluctuations; Part II. Fluctuation-Dissipation Relations in Equilibrium Systems: 11. Derivation of fluctuation-dissipation relations; 12. Equilibrium noise in quasistationary circuits. Nyquist theorem; 13. Fluctuations of electromagnetic fields in continuous media; Part III. Fluctuations in Nonequilibrium Gases: 14. Some basic concepts of hot-electrons' physics; 15. Simple model of current fluctuations in a semiconductor with hot electrons; 16. General kinetic theory of quasiclassical fluctuations in a gas of particles. The Boltzmann-Langevin equation; 17. Current fluctuations and noise temperature; 18. Current fluctuations and diffusion in a gas of hot electrons; 19. One-time correlation in nonequilibrium gases; 20. Intervalley noise in multivalley semiconductors; 21. Noise of hot electrons emitting optical phonons in the streaming regime; 22. Noise in a semiconductor with a postbreakdown stable current filament; Part IV. Generation-recombination noise: 23. G-R noise in uniform unipolar semiconductors; 24. Noise produced by recombination and diffusion; Part V. Noise in quantum ballistic systems: 25. Introduction; 26. Equilibrium noise and shot noise in quantum conductors; 27. Modulation noise in quantum point contacts; 28. Transition from a ballistic conductor to a macroscopic one; 29. Noise in tunnel junctions; Part VI. Resistance noise in metals: 30. Incoherent scattering of electrons by mobile defects; 31. Effect of mobile scattering centers on the electron interference pattern; 32. Fluctuations of the number of diffusing scattering centers; 33. Temperature fluctuations and the corresponding noise; Part VII. Noise in strongly disordered conductors: 34. Basic ideas of the percolation theory; 35. Resistance fluctuations in percolation systems; 36. Experiments; Part VIII. Low-frequency noise with a 1/f-type spectrum and random telegraph noise: 37. Introduction; 38. Some general properties of 1/f noise; 39. Basic models of 1/f noise; 40. 1/f noise in metals; 41. Low-frequency noise in semiconductors; 42. Magnetic noise in spin glasses and some other magnetic systems; 43. Temperature fluctuations as a possible source of 1/f noise; 44. Random telegraph noise; 45. Fluctuations with 1/f spectrum in other systems; 46. General conclusions on 1/f noise; Part IX. Noise in Superconductors and Superconducting Structures: 47. Noise in Josephson junctions; 48. Noise in type II superconductors; References; Subject index.
Low-pass parabolic FFT filter for airborne and satellite lidar signal processing.
Jiao, Zhongke; Liu, Bo; Liu, Enhai; Yue, Yongjian
2015-10-14
In order to reduce random errors in lidar signal inversion, a low-pass parabolic fast Fourier transform filter (PFFTF) was introduced for noise elimination. A compact airborne Raman lidar system was studied, which applied the PFFTF to process lidar signals. The mathematics and simulations of the PFFTF, along with low-pass filters, the sliding mean filter (SMF), the median filter (MF), empirical mode decomposition (EMD) and the wavelet transform (WT), were studied, and the practical engineering value of the PFFTF for lidar signal processing has been verified. The method has been tested on real lidar signals from the Wyoming Cloud Lidar (WCL). Results show that the PFFTF has advantages over the other methods: it preserves the useful high-frequency signal components well while simultaneously removing much of the random noise.
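A hedged sketch of a low-pass parabolic FFT filter: transform the signal, multiply the spectrum by a parabolic (Welch-shaped) window that falls from 1 at zero frequency to 0 at a cutoff, and transform back. The exact window used in the paper may differ; the cutoff and the test signal are illustrative.

```python
import numpy as np

def parabolic_lowpass(signal, fs, fc):
    """Low-pass filter with a parabolic frequency response H = 1 - (f/fc)^2."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    H = np.clip(1.0 - (f / fc) ** 2, 0.0, None)   # parabolic roll-off
    return np.fft.irfft(spec * H, n=n)

fs = 10_000.0                                     # samples per second
t = np.arange(0, 0.1, 1 / fs)
clean = np.exp(-t / 0.02)                         # toy lidar return
noisy = clean + 0.05 * np.random.default_rng(9).standard_normal(t.size)
filtered = parabolic_lowpass(noisy, fs, fc=500.0)
print("residual rms:", np.sqrt(np.mean((filtered - clean) ** 2)))
```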
Michael, Claire W; Naik, Kalyani; McVicker, Michael
2013-05-01
We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage. Four were detected later during specimen processing but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors were undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processes and minimizing batch size by staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.
Iterative dip-steering median filter
NASA Astrophysics Data System (ADS)
Huo, Shoudong; Zhu, Weihong; Shi, Taikun
2017-09-01
Seismic data are always contaminated with high noise components, which present processing challenges, especially for signal preservation and true-amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. It is known that the standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of random noise that needs to be attenuated and the amount of signal to be preserved. It then applies a median filter along the dominant dip and retains the signals. Iterations process the residual signals along the remaining dominant dips in descending sequence until all signals have been retained. The method is tested on both synthetic and field data gathers and also compared with the commonly used f-k least-squares de-noising and f-x deconvolution.
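A hedged sketch of one pass of the dip-steering step: flatten a gather along a candidate dip (integer sample shifts per trace), median-filter across traces, then unshift. The dominant-dip search via the Fourier-radial transform and the iteration over residuals are omitted here.

```python
import numpy as np

def median_along_dip(gather, dip, width=5):
    """gather: (ntraces, nsamples); dip: samples of moveout per trace."""
    nt, ns = gather.shape
    shifts = np.round(dip * np.arange(nt)).astype(int)
    flat = np.stack([np.roll(tr, -s) for tr, s in zip(gather, shifts)])
    half = width // 2
    padded = np.pad(flat, ((half, half), (0, 0)), mode="edge")
    filt = np.stack([np.median(padded[i:i + width], axis=0)
                     for i in range(nt)])                 # median across traces
    return np.stack([np.roll(tr, s) for tr, s in zip(filt, shifts)])

rng = np.random.default_rng(10)
gather = rng.normal(scale=0.3, size=(24, 200))          # random noise
for tr in range(24):                                    # one dipping event
    gather[tr, 50 + 2 * tr] += 1.0
signal = median_along_dip(gather, dip=2.0)              # dipping event retained
```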
Due Process in Appraisal: A Quasi-Experiment in Procedural Justice.
ERIC Educational Resources Information Center
Taylor, M. Susan; And Others
1995-01-01
Extended research on procedural justice by examining effects of a due-process performance-appraisal system on (government) employees' and managers' reactions. Employee-management pairs were randomly assigned to either a due-process appraisal system or the existing one. Although due-process employees received lower evaluations, both employees and…
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to heterogeneity of the soil properties, which leads to randomness of the thermal regime of frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regimes of frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe under the three analogous methods are the same, while the standard deviations differ. The distributions of the standard deviation differ greatly across radial coordinate locations, and the larger standard deviations occur mainly in the phase change area. The computed data with the random variable method and the stochastic process method differ greatly from the measured data, while the computed data with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
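A heavily hedged Monte Carlo sketch of the random-field treatment: thermal conductivity along the radius is sampled as a correlated lognormal field, and the steady radial temperature between a freezing pipe and the far field is computed per realization. The geometry, statistics, correlation length, and the steady-state simplification are all illustrative; the paper treats the transient freezing problem.

```python
import numpy as np

rng = np.random.default_rng(11)
r = np.linspace(0.05, 2.0, 400)            # radius (m), pipe wall to far field
T_p, T_inf = -25.0, 5.0                    # pipe and far-field temperatures (C)
corr_len, sigma_ln = 0.3, 0.25             # field correlation length, log-std

# Correlated lognormal conductivity field via Cholesky of the covariance.
cov = sigma_ln ** 2 * np.exp(-np.abs(r[:, None] - r[None, :]) / corr_len)
L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(r)))

temps = []
for _ in range(2000):
    k = 1.8 * np.exp(L @ rng.standard_normal(len(r)))   # W/(m K), random field
    # Steady radial conduction: T follows the cumulative thermal resistance.
    resist = np.concatenate([[0.0], np.cumsum(np.diff(r) / (k[1:] * r[1:]))])
    temps.append(T_p + (T_inf - T_p) * resist / resist[-1])

temps = np.array(temps)
print("std of temperature at mid-radius:", temps[:, 200].std())
```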
Direct generation of all-optical random numbers from optical pulse amplitude chaos.
Li, Pu; Wang, Yun-Cai; Wang, An-Bang; Yang, Ling-Zhen; Zhang, Ming-Jiang; Zhang, Jian-Zhong
2012-02-13
We propose and theoretically demonstrate an all-optical method for directly generating all-optical random numbers from the pulse amplitude chaos produced by a mode-locked fiber ring laser. Under an appropriate pump intensity, the mode-locked laser can experience a quasi-periodic route to chaos. Such chaos consists of a stream of pulses with a fixed repetition frequency but random intensities. In this method, we require neither a sampling procedure nor externally triggered clocks, but directly quantize the chaotic pulse stream into a random number sequence via an all-optical flip-flop. Moreover, our simulation results show that the pulse amplitude chaos has no periodicity and possesses a highly symmetric distribution of amplitude. Thus, in theory, the obtained random number sequence has high-quality randomness without post-processing, as verified by industry-standard statistical tests.
High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole
NASA Astrophysics Data System (ADS)
Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei
2018-01-01
Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst-case scenario in which the adversary launches the most powerful attacks against the quantum devices. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10⁻⁵. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
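Toeplitz-matrix hashing is a standard randomness-extraction step: an m × n binary Toeplitz matrix, fully defined by n + m − 1 seed bits, multiplies the raw bit vector over GF(2) to produce m nearly uniform output bits. The sketch below shrinks the experiment's 80 Gb × 45.6 Mb matrix to toy dimensions.

```python
import numpy as np

rng = np.random.default_rng(12)
n, m = 1024, 256                       # raw input bits -> extracted output bits

seed = rng.integers(0, 2, n + m - 1)   # defines every Toeplitz diagonal
i = np.arange(m)[:, None]
j = np.arange(n)[None, :]
T = seed[i - j + n - 1]                # constant along diagonals (Toeplitz)

raw = rng.integers(0, 2, n)            # stand-in for imperfect raw bits
out = (T @ raw) % 2                    # matrix-vector product over GF(2)
print(out[:16])
```

In practice the matrix-vector product is computed with FFT-based convolution rather than a dense multiply, which is what makes the enormous matrix sizes of the experiment tractable.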
Validity as Process: A Construct Driven Measure of Fidelity of Implementation
ERIC Educational Resources Information Center
Jones, Ryan Seth
2013-01-01
Estimates of fidelity of implementation are essential to interpret the effects of educational interventions in randomized controlled trials (RCTs). While random assignment protects against many threats to validity, and therefore provides the best approximation to a true counterfactual condition, it does not ensure that the treatment condition…
Reactions and Transport: Diffusion, Inertia, and Subdiffusion
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Fedotov, Sergei; Horsthemke, Werner
Particles, such as molecules, atoms, or ions, and individuals, such as cells or animals, move in space driven by various forces or cues. In particular, particles or individuals can move randomly, undergo velocity jump processes or spatial jump processes [333]. The steps of the random walk can be independent or correlated, unbiased or biased. The probability density function (PDF) for the jump length can decay rapidly or exhibit a heavy tail. Similarly, the PDF for the waiting time between successive jumps can decay rapidly or exhibit a heavy tail. We will discuss these various possibilities in detail in Chap. 3. Below we provide an introduction to three transport processes: standard diffusion, transport with inertia, and anomalous diffusion.
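A compact sketch of the heavy-tailed case: a continuous-time random walk with unit jumps and Pareto waiting times of infinite mean (tail exponent alpha < 1) subdiffuses, with mean squared displacement growing like t^alpha rather than t. Sizes and alpha are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)
alpha, nwalkers, nsteps = 0.6, 2000, 1000

waits = rng.random((nwalkers, nsteps)) ** (-1.0 / alpha)   # Pareto, x >= 1
times = np.cumsum(waits, axis=1)
pos = np.cumsum(rng.choice([-1.0, 1.0], size=(nwalkers, nsteps)), axis=1)

for t in (1e1, 1e2, 1e3, 1e4):
    idx = (times <= t).sum(axis=1) - 1          # last jump completed by t
    x = np.where(idx >= 0, pos[np.arange(nwalkers), idx], 0.0)
    print(f"t={t:.0e}  MSD={np.mean(x ** 2):8.2f}")
# Successive decades grow by roughly 10**alpha (about 4 here), not by 10
# as in ordinary diffusion.
```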
Simulated annealing in networks for computing possible arrangements for red and green cones
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1987-01-01
Attention is given to network models in which each of the cones of the retina is given a provisional color at random, and then the cones are allowed to determine the colors of their neighbors through an iterative process. A symmetric-structure spin-glass model has allowed arrays to be generated from completely random arrangements of red and green to arrays with approximately as much disorder as the parafoveal cones. Simulated annealing has also been added to the process in an attempt to generate color arrangements with greater regularity, and hence more revealing moiré patterns, than the arrangements yielded by quenched spin-glass processes. Attention is given to the perceptual implications of these results.
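A sketch of the annealing idea: start from a random red/green assignment on a grid of "cones", define a spin-glass-like energy that penalizes same-color neighbours (favouring regular interleaving), and anneal with Metropolis moves. Grid size, couplings, and the cooling schedule are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(14)
L = 32
s = rng.choice([-1, 1], size=(L, L))            # -1 = red, +1 = green

def local_field(s, i, j):
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
            s[i, (j + 1) % L] + s[i, (j - 1) % L])

T = 3.0
for sweep in range(600):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Energy E = sum over bonds of s_i * s_j penalizes same colors;
        # flipping s[i, j] changes E by -2 * s * (sum of neighbours).
        dE = -2.0 * s[i, j] * local_field(s, i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    T *= 0.99                                    # slow cooling

same = np.mean(s * np.roll(s, 1, 0) > 0)
print(f"same-colour vertical neighbours: {same:.2f} (0.5 if fully random)")
```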
An Overview of Randomization and Minimization Programs for Randomized Clinical Trials
Saghaei, Mahmoud
2011-01-01
Randomization is an essential component of sound clinical trials; it prevents selection biases and helps in blinding the allocations. Randomization is a process by which subsequent subjects are enrolled into trial groups only by chance, which essentially eliminates selection biases. A serious potential consequence of randomization is severe imbalance among the treatment groups with respect to some prognostic factors, which can invalidate the trial results or necessitate complex and usually unreliable secondary analyses to eradicate the source of the imbalances. Minimization, on the other hand, tends to allocate in such a way as to minimize the differences among groups with respect to prognostic factors. Pure minimization is therefore completely deterministic; that is, one can predict the allocation of the next subject by knowing the factor levels of previously enrolled subjects and the properties of the next subject. To eliminate the predictability of minimization, it is necessary to include some elements of randomness in the minimization algorithms. In this article, brief descriptions of randomization and minimization are presented, followed by an introduction to selected randomization and minimization programs. PMID:22606659
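A sketch of minimization with a random element, in the spirit of the Pocock-Simon method: the next subject goes to the arm that minimises covariate imbalance, but only with high probability, so allocations stay unpredictable. The factors, the range-based imbalance score, and the 0.8 biased-coin probability are illustrative choices.

```python
import random

RNG = random.Random(42)   # fixed seed so the allocations can be audited later

def minimized_assignment(counts, factors, arms=("A", "B"), p_best=0.8):
    """counts[arm][factor][level]: prior subjects with that level in that arm."""
    imbalance = {}
    for arm in arms:
        score = 0
        for f, level in factors.items():
            # Hypothetical counts if this subject joined `arm`.
            hypo = {a: counts[a][f].get(level, 0) + (a == arm) for a in arms}
            score += max(hypo.values()) - min(hypo.values())
        imbalance[arm] = score
    best = min(arms, key=lambda a: imbalance[a])
    others = [a for a in arms if a != best]
    pick = best if RNG.random() < p_best else RNG.choice(others)  # biased coin
    for f, level in factors.items():
        counts[pick][f][level] = counts[pick][f].get(level, 0) + 1
    return pick

counts = {a: {"sex": {}, "age_group": {}} for a in ("A", "B")}
for subj in [{"sex": "F", "age_group": "<50"},
             {"sex": "M", "age_group": "<50"},
             {"sex": "F", "age_group": ">=50"}]:
    print(minimized_assignment(counts, subj))
```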
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graph matching is a critical issue in many areas of computer vision. In this paper, a new graph-matching algorithm combining shape contexts and reweighted random walks is proposed. Building on the shape-context local descriptor, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. The main idea is to use the shape-context descriptors to control the random-walk probability matrix during the iteration: a bias matrix is computed from the descriptors and used at each iteration to improve the accuracy of the random walks and random jumps, and the one-to-one registration result is finally obtained by discretizing the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also inherits the rotation, translation, and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, together with comparisons against other algorithms, confirm that the new method produces excellent results in graph matching.
Random ambience using high fidelity images
NASA Astrophysics Data System (ADS)
Abu, Nur Azman; Sahib, Shahrin
2011-06-01
Most secure communication nowadays mandates true random keys as an input. These operations are mostly designed and taken care of by the developers of the cryptosystem. Given the nature of confidential crypto development today, pseudorandom keys are typically designed and still preferred by the developers of the cryptosystem. However, pseudorandom keys are predictable, periodic, and repeatable, and hence carry minimal entropy. True random keys are believed to be generated only via hardware random number generators. Careful statistical analysis is still required to have any confidence that the process and apparatus generate numbers that are sufficiently random to suit cryptographic use. In the underlying research, each moment in life is considered unique in itself: the random key is unique to the moment at which the user generates it, whenever he or she needs random keys in practical secure communication. The ambience captured in a high-fidelity digital image is tested for randomness according to the NIST Statistical Test Suite, and a recommendation on generating random cryptographic keys live at 4 megabits per second is reported.
Single-shot stand-off chemical identification of powders using random Raman lasing
Hokr, Brett H.; Bixler, Joel N.; Noojin, Gary D.; Thomas, Robert J.; Rockwell, Benjamin A.; Yakovlev, Vladislav V.; Scully, Marlan O.
2014-01-01
We consider the task of identifying explosives, hazardous chemicals, and biological materials from a safe distance. Much of the prior work on stand-off spectroscopy using light has been devoted to generating a backward-propagating beam of light that can be used to drive further spectroscopic processes. The discovery of random lasing and, more recently, random Raman lasing provides a mechanism for remotely generating copious amounts of chemically specific Raman-scattered light. The bright nature of random Raman lasing renders directionality unnecessary, allowing for the detection and identification of chemicals from large distances in real time. In this article, the single-shot remote identification of chemicals at kilometer-scale distances is experimentally demonstrated using random Raman lasing. PMID:25114231
Image discrimination models predict detection in fixed but not random noise
NASA Technical Reports Server (NTRS)
Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)
1997-01-01
By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.
Representation of Reserves Through a Brownian Motion Model
NASA Astrophysics Data System (ADS)
Andrade, M.; Ferreira, M. A. M.; Filipe, J. A.
2012-11-01
The Brownian motion is commonly used as an approximation for some random walks and also for the classic risk process. As random walks and the classic risk process are frequently used as stochastic models to represent reserves, it is natural to consider the Brownian motion for the same purpose. In this study a model based on the Brownian motion is presented to represent reserves, and the Brownian motion is used to estimate the ruin probability of a fund. Models of this kind are often considered in the study of pension funds.
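A minimal Monte Carlo sketch of such a reserve model, assuming the drifted Brownian form R(t) = u + μt + σW(t) with illustrative parameter values; for this particular model the infinite-horizon ruin probability has the closed form exp(−2μu/σ²), which serves as a sanity check:

```python
# Monte Carlo estimate of the ruin probability of a reserve modelled as
# Brownian motion with drift; all parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
u, mu, sigma = 5.0, 0.5, 2.0          # initial reserve, drift, volatility
T, dt, n_paths = 100.0, 0.02, 2000    # horizon, step, number of sample paths

n = int(T / dt)
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n))
paths = u + increments.cumsum(axis=1)
ruined = (paths.min(axis=1) <= 0.0).mean()   # fraction hitting zero by time T

print("simulated ruin probability :", ruined)
print("analytic, infinite horizon :", np.exp(-2 * mu * u / sigma**2))
```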
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
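A simplified sketch of the core comparison, leaving out the statistical test that gives the validity probability: eigenvalues of a covariance matrix built from a delay-embedded structured signal versus those from an equally sized Gaussian random process. The embedding dimension, delay, and test signal are illustrative choices:

```python
# Compare covariance eigenvalue spectra of an embedded signal and of a
# Gaussian random process with the same dimension and number of points.
import numpy as np

def embed(x, dim, tau=1):
    """Delay-embed a 1-D series into vectors of length `dim`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

rng = np.random.default_rng(2)
t = np.arange(5000) * 0.05
signal = np.sin(t) + 0.1 * rng.standard_normal(t.size)  # low-dimensional + noise

dim = 8
eig_signal = np.linalg.eigvalsh(np.cov(embed(signal, dim).T))[::-1]
eig_noise = np.linalg.eigvalsh(np.cov(embed(rng.standard_normal(t.size), dim).T))[::-1]

# A few eigenvalues dominate for the structured signal; the Gaussian
# process yields a nearly flat spectrum.
print("signal spectrum:", np.round(eig_signal / eig_signal.sum(), 3))
print("noise  spectrum:", np.round(eig_noise / eig_noise.sum(), 3))
```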
A stylistic classification of Russian-language texts based on the random walk model
NASA Astrophysics Data System (ADS)
Kramarenko, A. A.; Nekrasov, K. A.; Filimonov, V. V.; Zhivoderov, A. A.; Amieva, A. A.
2017-09-01
A formal approach to text analysis is suggested that is based on the random walk model. The frequencies and reciprocal positions of the vowel letters are matched by a process of quasi-particle migration. A statistically significant difference in the migration parameters is found for texts of different functional styles, demonstrating that texts can be classified with the suggested method. Five groups of texts are singled out that can be distinguished from one another by the parameters of the quasi-particle migration process.
49 CFR 382.401 - Retention of records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... substances collection process (except calibration of evidential breath testing devices). (3) One year... record is required to be prepared, it must be maintained. (1) Records related to the collection process: (i) Collection logbooks, if used; (ii) Documents relating to the random selection process; (iii...
Transient Oscillations in Mechanical Systems of Automatic Control with Random Parameters
NASA Astrophysics Data System (ADS)
Royev, B.; Vinokur, A.; Kulikov, G.
2018-04-01
Transient oscillations in mechanical systems of automatic control with random parameters are a relevant but insufficiently studied issue. In this paper, a modified spectral method is applied to investigate the problem. The nature of the dynamic processes and the phase portraits are analyzed as functions of the amplitude and frequency of the external influence. The obtained results show that the dynamic phenomena occurring in systems with random parameters under external influence are complex, and their study requires further investigation.
ERIC Educational Resources Information Center
Hooper, Stephen R.; Costa, Lara-Jeane C.; McBee, Matthew; Anderson, Kathleen L.; Yerby, Donna Carlson; Childress, Amy; Knuth, Sean B.
2013-01-01
In a randomized controlled trial, 205 students were followed from grades 1 to 3 with a focus on changes in their writing trajectories following an evidence-based intervention during the spring of second grade. Students were identified as being at-risk (n = 138), and then randomized into treatment (n = 68) versus business-as-usual conditions (n =…
Synchronizability of random rectangular graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded into hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied: both upper and lower bounds on the eigenratio of the network Laplacian matrix are determined analytically, and it is proven that the more elongated the rectangular network is, the harder it becomes to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is investigated numerically, showing complete consistency with the theoretical results.
A class of generalized Ginzburg-Landau equations with random switching
NASA Astrophysics Data System (ADS)
Wu, Zheng; Yin, George; Lei, Dongxia
2018-09-01
This paper focuses on a class of generalized Ginzburg-Landau equations with random switching. In our formulation, the nonlinear term is allowed to have higher polynomial growth rate than the usual cubic polynomials. The random switching is modeled by a continuous-time Markov chain with a finite state space. First, an explicit solution is obtained. Then properties such as stochastic-ultimate boundedness and permanence of the solution processes are investigated. Finally, two-time-scale models are examined leading to a reduction of complexity.
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
NASA Astrophysics Data System (ADS)
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
IS THE SUICIDE RATE A RANDOM WALK?
Yang, Bijou; Lester, David; Lyke, Jennifer; Olsen, Robert
2015-06-01
The yearly suicide rates for the period 1933-2010 and the daily suicide numbers for 1990 and 1991 were examined for whether the distribution of difference scores (from year to year and from day to day) fitted a normal distribution, a characteristic of stochastic processes that follow a random walk. If the suicide rate were a random walk, then any disturbance to the suicide rate would have a permanent effect and national suicide prevention efforts would likely fail. The distribution of difference scores from day to day (but not the difference scores from year to year) fitted a normal distribution and, therefore, were consistent with a random walk.
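The key step, testing whether the difference scores are consistent with the normal increments of a random walk, can be sketched as follows; the series below is synthetic (the study used yearly and daily suicide data), and the particular normality test is an assumption:

```python
# Check whether first differences of a rate series look normally distributed,
# as they should for a Gaussian random walk; data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rate = 12.0 + rng.normal(0.0, 0.3, 78).cumsum()   # synthetic random walk, 78 "years"
diffs = np.diff(rate)                             # year-to-year difference scores

stat, p = stats.shapiro(diffs)                    # Shapiro-Wilk normality test
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")
print("consistent with a random walk" if p > 0.05 else "normality rejected")
```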
Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Qingda, E-mail: weiqd@hqu.edu.cn; Chen, Xian, E-mail: chenxian@amss.ac.cn
In this paper we study two-person nonzero-sum games for continuous-time jump processes with randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then, by constructing approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is the unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.
Random walk, diffusion and mixing in simulations of scalar transport in fluid flows
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.
2008-12-01
Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
NASA Astrophysics Data System (ADS)
Wang, Jun; Li, Xiaowei; Hu, Yuhen; Wang, Qiong-Hua
2018-03-01
A cryptosystem free of phase-retrieval attacks, based on cylindrical asymmetric diffraction and double-random phase encoding (DRPE), is proposed. The plaintext is abstracted as a cylinder, while the observed diffraction and holographic surfaces are concentric cylinders. The plaintext can therefore be encrypted through a two-step asymmetric diffraction process with double pseudo-random phase masks located on the object surface and the first diffraction surface. After inverse diffraction from the holographic surface to the object surface, the plaintext can be reconstructed using a decryption process. Since diffraction propagating from the inner cylinder to the outer cylinder differs from that in the reversed direction, the proposed cryptosystem is asymmetric and hence free of phase-retrieval attacks. Numerical simulation results demonstrate the flexibility and effectiveness of the proposed cryptosystem.
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally: the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness.
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop application that characterizes spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model; MAD# executes the inverse process by running the forward model many times, transferring information from the indirect data to the target variable. MAD# offers two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers to submit serial or parallel jobs using scheduling policies, resource monitoring, and a job-queuing mechanism. This poster shows how MAD# reduces the execution time of random-field characterization using these two parallel approaches in different case studies. A test of the approach used a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (10 million would take approximately 1,200 hours). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (from 1,200 hours to 60 hours).
Relationships between digital signal processing and control and estimation theory
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1978-01-01
Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.
Recruiting Participants for Randomized Controlled Trials
ERIC Educational Resources Information Center
Gallagher, H. Alix; Roschelle, Jeremy; Feng, Mingyu
2014-01-01
The objective of this study was to look across strategies used in a wide range of studies to build a framework for researchers to use in conceptualizing the recruitment process. This paper harvests lessons learned across 19 randomized controlled trials in K-12 school settings conducted by a leading research organization to identify strategies that…
Emotional Intelligence and Life Adjustment for Nigerian Secondary Students
ERIC Educational Resources Information Center
Ogoemeka, Obioma Helen
2013-01-01
In the process of educating adolescents, good emotional development and life adjustment are two significant factors for teachers to know. This study employed random cluster sampling of senior secondary school students in Ondo and Oyo States in south-western Nigeria. Random sampling was employed to select 1,070 students. The data collected were…
Nonuniform sampling theorems for random signals in the linear canonical transform domain
NASA Astrophysics Data System (ADS)
Shuiqing, Xu; Congmei, Jiang; Yi, Chai; Youqiang, Hu; Lei, Huang
2018-06-01
Nonuniform sampling can be encountered in various practical processes because of random events or a poor timebase. The analysis and applications of nonuniform sampling of deterministic signals related to the linear canonical transform (LCT) have been well studied, but up to now no papers have been published regarding nonuniform sampling theorems for random signals related to the LCT. The aim of this article is to explore the nonuniform sampling and reconstruction of random signals associated with the LCT. First, some special nonuniform sampling models are briefly introduced. Second, based on these models, reconstruction theorems for random signals from various nonuniform samples associated with the LCT are derived. Finally, simulation results are presented to verify the accuracy of the sampling theorems. Potential practical applications of nonuniform sampling for random signals are also discussed.
High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole.
Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei
2018-01-05
Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario in which a quantum adversary launches the most powerful attacks. After considering statistical fluctuations and applying an 80 Gb×45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
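Toeplitz-matrix hashing of the kind mentioned above is a standard randomness-extraction step: a random m×n Toeplitz matrix over GF(2) compresses n raw bits into m nearly uniform bits. A toy sketch follows, with sizes nowhere near the 80 Gb×45.6 Mb matrix of the experiment; the seed and raw-bit sources are stand-ins:

```python
# Toy Toeplitz-hashing randomness extractor over GF(2); sizes illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, m = 1024, 256                       # raw input bits -> extracted output bits
seed = rng.integers(0, 2, n + m - 1)   # seed defining the Toeplitz diagonals

# Build T with T[i, j] = seed[i - j + n - 1]; entries depend only on i - j.
i, j = np.indices((m, n))
T = seed[i - j + n - 1]

raw = rng.integers(0, 2, n)            # stand-in for biased raw detector bits
extracted = (T @ raw) % 2              # matrix-vector product modulo 2
print(extracted[:32])
```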
Efficient Quantum Pseudorandomness.
Brandão, Fernando G S L; Harrow, Aram W; Horodecki, Michał
2016-04-29
Randomness is both a useful way to model natural systems and a useful tool for engineered systems, e.g., in computation, communication, and control. Fully random transformations require exponential time for either classical or quantum systems, but in many cases pseudorandom operations can emulate certain properties of truly random ones. Indeed, in the classical realm there is by now a well-developed theory regarding such pseudorandom operations. However, the construction of such objects turns out to be much harder in the quantum case. Here, we show that random quantum unitary time evolutions ("circuits") are a powerful source of quantum pseudorandomness. This gives for the first time a polynomial-time construction of quantum unitary designs, which can replace fully random operations in most applications, and shows that generic quantum dynamics cannot be distinguished from truly random processes. We discuss applications of our result to quantum information science, cryptography, and understanding the self-equilibration of closed quantum dynamics.
Exits in order: How crowding affects particle lifetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penington, Catherine J.; Simpson, Matthew J.; Baker, Ruth E.
2016-06-28
Diffusive processes are often represented using stochastic random walk frameworks. The amount of time taken for an individual in a random walk to intersect with an absorbing boundary is a fundamental property that is often referred to as the particle lifetime, or the first passage time. The mean lifetime of particles in a random walk model of diffusion is related to the amount of time required for the diffusive process to reach a steady state. Mathematical analysis describing the mean lifetime of particles in a standard model of diffusion without crowding is well known. However, the lifetime of agents in a random walk with crowding has received much less attention. Since many applications of diffusion in biology and biophysics include crowding effects, here we study a discrete model of diffusion that incorporates crowding. Using simulations, we show that crowding has a dramatic effect on agent lifetimes, and we derive an approximate expression for the mean agent lifetime that includes crowding effects. Our expression matches simulation results very well, and highlights the importance of crowding effects that are sometimes overlooked.
Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.
Vickers, D; Smith, P
1985-01-01
In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.
Naming games in two-dimensional and small-world-connected random geometric networks.
Lu, Qiming; Korniss, G; Szymanski, B K
2008-01-01
We investigate a prototypical agent-based model, the naming game, on two-dimensional random geometric networks. The naming game [Baronchelli, J. Stat. Mech.: Theory Exp. (2006) P06014] is a minimal model, employing local communications, that captures the emergence of shared communication schemes (languages) in a population of autonomous semiotic agents. Implementing the naming game with local broadcasts on random geometric graphs serves as a model for agreement dynamics in large-scale, autonomously operating wireless sensor networks. Further, it captures essential features of the scaling properties of the agreement process for spatially embedded autonomous agents. Among the relevant observables capturing the temporal properties of the agreement process, we investigate the cluster-size distribution and the distribution of the agreement times, both exhibiting dynamic scaling. We also present results for the case when a small density of long-range communication links is added on top of the random geometric graph, resulting in a "small-world"-like network and yielding a significantly reduced time to reach global agreement.
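For readers unfamiliar with the model, a minimal pairwise naming-game sketch on a random geometric graph follows; the paper studies local broadcasts, so the pairwise variant, the graph size, and the radius here are simplifying assumptions:

```python
# Minimal pairwise naming game on a random geometric graph.
import random
import networkx as nx

random.seed(5)
g = nx.random_geometric_graph(200, 0.15, seed=5)
vocab = {v: set() for v in g}                  # each agent's word inventory
next_word = 0

for _ in range(200000):
    speaker = random.choice(list(g))
    neighbors = list(g[speaker])
    if not neighbors:
        continue
    hearer = random.choice(neighbors)
    if not vocab[speaker]:                     # empty inventory: invent a name
        next_word += 1
        vocab[speaker].add(next_word)
    word = random.choice(sorted(vocab[speaker]))
    if word in vocab[hearer]:                  # success: both collapse to the word
        vocab[speaker] = {word}
        vocab[hearer] = {word}
    else:                                      # failure: hearer learns the word
        vocab[hearer].add(word)

print("distinct words remaining:", len(set().union(*vocab.values())))
```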
Recognizing pedestrian's unsafe behaviors in far-infrared imagery at night
NASA Astrophysics Data System (ADS)
Lee, Eun Ju; Ko, Byoung Chul; Nam, Jae-Yeal
2016-05-01
Pedestrian behavior recognition is important for early accident prevention in advanced driver assistance systems (ADAS). In particular, because most pedestrian-vehicle crashes occur between late night and early dawn, our study focuses on recognizing unsafe pedestrian behavior using thermal images captured from a moving vehicle at night. For recognizing unsafe behavior, this study uses a convolutional neural network (CNN), which delivers high recognition performance. However, because a traditional CNN requires very expensive training time and memory, we design a light CNN consisting of two convolutional layers and two subsampling layers for the real-time processing required by vehicle applications. In addition, we combine the light CNN with a boosted random forest (Boosted RF) classifier, so that the output of the CNN is not fully connected to the classifier but randomly connected to the boosted random forest. We name this CNN the randomly connected CNN (RC-CNN). The proposed method was successfully applied to the pedestrian unsafe behavior (PUB) dataset captured by a far-infrared camera at night, and its behavior recognition accuracy is confirmed to be higher than that of related CNN-based algorithms, with a shorter processing time.
Experimental Quantum Randomness Processing Using Superconducting Qubits
NASA Astrophysics Data System (ADS)
Yuan, Xiao; Liu, Ke; Xu, Yuan; Wang, Weiting; Ma, Yuwei; Zhang, Fang; Yan, Zhaopeng; Vijay, R.; Sun, Luyan; Ma, Xiongfeng
2016-07-01
Coherently manipulating multipartite quantum correlations leads to remarkable advantages in quantum information processing. A fundamental question is whether such quantum advantages persist only by exploiting multipartite correlations, such as entanglement. Recently, Dale, Jennings, and Rudolph answered the question in the negative by showing that a randomness-processing task, the quantum Bernoulli factory, is strictly more powerful when using quantum coherence than when restricted to classical mechanics. In this Letter, focusing on the same scenario, we propose a theoretical protocol that is classically impossible but can be implemented solely using quantum coherence without entanglement. We demonstrate the protocol by exploiting the high-fidelity quantum state preparation and measurement with a superconducting qubit in the circuit quantum electrodynamics architecture and a nearly quantum-limited parametric amplifier. Our experiment shows the advantage of using quantum coherence of a single qubit for information processing even when multipartite correlation is not present.
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph-older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
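The grown-graph model is simple enough to simulate directly. A sketch with toy sizes, sweeping delta across the critical value 1/8 quoted above (networkx is used for the component bookkeeping):

```python
# Simulate the grown-graph model: add a vertex each step, then with
# probability delta join two uniformly random vertices by an edge.
import random
import networkx as nx

def grow(t, delta, seed=6):
    random.seed(seed)
    g = nx.Graph()
    for v in range(t):
        g.add_node(v)
        if v >= 1 and random.random() < delta:
            a, b = random.sample(range(v + 1), 2)
            g.add_edge(a, b)
    return g

for delta in (0.05, 0.125, 0.25):
    g = grow(200000, delta)
    giant = max(len(c) for c in nx.connected_components(g))
    print(f"delta = {delta}: largest component fraction = {giant / len(g):.4f}")
```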
Exploiting both optical and electrical anisotropy in nanowire electrodes for higher transparency.
Dong, Jianjin; Goldthorpe, Irene A
2018-01-26
Transparent electrodes such as indium tin oxide and random meshes of silver nanowires (AgNWs) have isotropic in-plane properties. However, we show that imparting some alignment to AgNWs can create anisotropic transparency and electrical conductivity characteristics that may benefit many applications. For example, liquid crystal displays and the touch sensors on top of them often only need to be transparent to one type of polarized light as well as predominantly conductive in only one direction. Herein, AgNWs are slightly preferentially aligned during their deposition by rod coating. Compared to randomly oriented AgNW films, the alignment boosts the transparency to perpendicularly polarized light, as well as achieves a higher transparency for a given sheet resistance in one direction. These factors together increase the transparency of a 16 Ω/sq electrode by 7.3 percentage points. The alignment technique is cheap and scalable, compatible with roll-to-roll processes, and most importantly does not require extra processing steps, as rod coating is already a standard process for AgNW electrode fabrication.
On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model
NASA Astrophysics Data System (ADS)
Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin
2018-01-01
We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
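The coupon-collector analogy invoked above can be checked numerically: the standardized collection time converges to a Gumbel law. A small sketch, using the exact representation of the collection time as a sum of independent geometric variables (sizes illustrative):

```python
# Coupon collector's problem: standardized completion time is asymptotically
# Gumbel, with mean Euler-Mascheroni gamma and variance pi**2 / 6.
import numpy as np

rng = np.random.default_rng(7)
n, trials = 500, 20000

# With k coupons still missing, the next new coupon takes Geometric(k/n) draws;
# summing over k = n..1 gives the full collection time.
p = np.arange(n, 0, -1) / n
times = rng.geometric(p, size=(trials, n)).sum(axis=1)

standardized = times / n - np.log(n)
print("mean:", standardized.mean(), " Gumbel mean:", np.euler_gamma)
print("var :", standardized.var(), " Gumbel var :", np.pi**2 / 6)
```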
Mid-infrared optical parametric oscillator pumped by an amplified random fiber laser
NASA Astrophysics Data System (ADS)
Shang, Yaping; Shen, Meili; Wang, Peng; Li, Xiao; Xu, Xiaojun
2017-01-01
Recently, the concept of random fiber lasers has attracted a great deal of attention for its ability to generate incoherent light without a traditional laser resonator, which is free of mode competition and ensures a stationary, narrow-band, continuous modeless spectrum. In this Letter, we report what is, to the best of our knowledge, the first optical parametric oscillator (OPO) pumped by an amplified 1070 nm random fiber laser (RFL), built to generate stationary mid-infrared (mid-IR) laser output. The experiment realized watt-level laser output in the mid-IR range and operated relatively stably. The use of the RFL seed source allowed us to take advantage of its stable time-domain characteristics. The beam profile, spectrum, and time-domain properties of the signal light were measured to analyze the frequency down-conversion process under this new pumping condition. The results suggest that the near-infrared (near-IR) signal light `inherited' good beam performance from the pump light. These results should benefit the further development of optical parametric processes under different pumping conditions.
Self-Similar Random Process and Chaotic Behavior In Serrated Flow of High Entropy Alloys
Chen, Shuying; Yu, Liping; Ren, Jingli; Xie, Xie; Li, Xueping; Xu, Ying; Zhao, Guangfeng; Li, Peizhen; Yang, Fuqian; Ren, Yang; Liaw, Peter K.
2016-01-01
The statistical and dynamic analyses of the serrated-flow behavior in the nanoindentation of a high-entropy alloy, Al0.5CoCrCuFeNi, at various holding times and temperatures, are performed to reveal the hidden order associated with the seemingly-irregular intermittent flow. Two distinct types of dynamics are identified in the high-entropy alloy, which are based on the chaotic time-series, approximate entropy, fractal dimension, and Hurst exponent. The dynamic plastic behavior at both room temperature and 200 °C exhibits a positive Lyapunov exponent, suggesting that the underlying dynamics is chaotic. The fractal dimension of the indentation depth increases with the increase of temperature, and there is an inflection at the holding time of 10 s at the same temperature. A large fractal dimension suggests the concurrent nucleation of a large number of slip bands. In particular, for the indentation with the holding time of 10 s at room temperature, the slip process evolves as a self-similar random process with a weak negative correlation similar to a random walk. PMID:27435922
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
NASA Astrophysics Data System (ADS)
An, Suyeong; Kim, Byoungsoo; Lee, Jonghwi
2017-07-01
Porous materials with surprisingly diverse structures have been utilized in nature for many functional purposes. However, the structures and applications of porous man-made polymer materials have been limited by the use of processing techniques involving foaming agents. Herein, we demonstrate for the first time the outstanding hardness and modulus properties of an elastomer that originate from the novel processing approach applied. Polyurethane films of 100-μm thickness with biomimetic ordered porous structures were prepared using directional melt crystallization of a solvent and exhibited hardness and modulus values that were 6.8 and 4.3 times higher than those of the random pore structure, respectively. These values surpass the theoretical prediction of the typical model for porous materials, which works reasonably well for random pores but not for directional pores. Both the ordered and random pore structures exhibited similar porosities and pore sizes, which decreased with increasing solution concentration. This unexpectedly significant improvement of the hardness and modulus could open up new application areas for porous polymeric materials using this relatively novel processing technique.
Novel method of extracting motion from natural movies.
Suzuki, Wataru; Ichinohe, Noritaka; Tani, Toshiki; Hayami, Taku; Miyakawa, Naohisa; Watanabe, Satoshi; Takeichi, Hiroshige
2017-11-01
The visual system in primates can be segregated into motion and shape pathways. Interaction occurs at multiple stages along these pathways. Processing of shape-from-motion and biological motion is considered to be a higher-order integration process involving motion and shape information. However, relatively limited types of stimuli have been used in previous studies on these integration processes. We propose a new algorithm to extract object motion information from natural movies and to move random dots in accordance with the information. The object motion information is extracted by estimating the dynamics of local normal vectors of the image intensity projected onto the x-y plane of the movie. An electrophysiological experiment on two adult common marmoset monkeys (Callithrix jacchus) showed that the natural and random dot movies generated with this new algorithm yielded comparable neural responses in the middle temporal visual area. In principle, this algorithm provided random dot motion stimuli containing shape information for arbitrary natural movies. This new method is expected to expand the neurophysiological and psychophysical experimental protocols to elucidate the integration processing of motion and shape information in biological systems. The novel algorithm proposed here was effective in extracting object motion information from natural movies and provided new motion stimuli to investigate higher-order motion information processing.
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y.
Social Noise: Generating Random Numbers from Twitter Streams
NASA Astrophysics Data System (ADS)
Fernández, Norberto; Quintas, Fernando; Sánchez, Luis; Arias, Jesús
2015-12-01
Due to the multiple applications of random numbers in computer systems (cryptography, online gambling, computer simulation, etc.) it is important to have mechanisms to generate these numbers. True Random Number Generators (TRNGs) are commonly used for this purpose. TRNGs rely on non-deterministic sources to generate randomness. Physical processes (like noise in semiconductors, quantum phenomenon, etc.) play this role in state of the art TRNGs. In this paper, we depart from previous work and explore the possibility of defining social TRNGs using the stream of public messages of the microblogging service Twitter as randomness source. Thus, we define two TRNGs based on Twitter stream information and evaluate them using the National Institute of Standards and Technology (NIST) statistical test suite. The results of the evaluation confirm the feasibility of the proposed approach.
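A hedged sketch of the general idea, with a list of strings standing in for the live Twitter stream; the hashing and debiasing choices below are illustrative assumptions, not the authors' exact construction:

```python
# Derive bits from a public message stream by hashing each message, then
# debias with the classic von Neumann extractor. Real entropy would come
# from the unpredictability of the live stream, not from this stand-in list.
import hashlib

messages = [f"message {i} payload" for i in range(400)]   # stand-in stream

raw_bits = []
for msg in messages:
    digest = hashlib.sha256(msg.encode()).digest()
    raw_bits.extend((digest[0] >> k) & 1 for k in range(8))

# Von Neumann debiasing: 01 -> 0, 10 -> 1, discard 00 and 11.
out = [a for a, b in zip(raw_bits[::2], raw_bits[1::2]) if a != b]
print(len(out), "debiased bits; first 32:", out[:32])
```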
Modeling for Ultrasonic Health Monitoring of Foams with Embedded Sensors
NASA Technical Reports Server (NTRS)
Wang, L.; Rokhlin, S. I.; Rokhlin, Stanislav, I.
2005-01-01
In this report analytical and numerical methods are proposed to estimate the effective elastic properties of regular and random open-cell foams. The methods are based on the principle of minimum energy and on structural beam models. The analytical solutions are obtained using symbolic processing software. The microstructure of the random foam is simulated using Voronoi tessellation together with a rate-dependent random close-packing algorithm. The statistics of the geometrical properties of random foams corresponding to different packing fractions have been studied. The effects of the packing fraction on elastic properties of the foams have been investigated by decomposing the compliance into bending and axial compliance components. It is shown that the bending compliance increases and the axial compliance decreases when the packing fraction increases. Keywords: Foam; Elastic properties; Finite element; Randomness
NASA Astrophysics Data System (ADS)
Bisadi, Zahra; Acerbi, Fabio; Fontana, Giorgio; Zorzi, Nicola; Piemonte, Claudio; Pucker, Georg; Pavesi, Lorenzo
2018-02-01
A small-sized photonic quantum random number generator, easy to implement in small electronic devices for secure data encryption and other applications, is in high demand nowadays. Here, we propose a compact configuration in which a silicon-nanocrystal large-area light-emitting device (LED) is coupled to a silicon photomultiplier to generate random numbers. The random number generation methodology is based on photon arrival times and is robust against the non-idealities of the detector and of the source of quantum entropy. The raw data show a high quality of randomness and pass all the statistical tests in the National Institute of Standards and Technology (NIST) test suite without a post-processing algorithm. The highest bit rate is 0.5 Mbps, with an efficiency of 4 bits per detected photon.
Unbiased All-Optical Random-Number Generator
NASA Astrophysics Data System (ADS)
Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja
2017-10-01
The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.
NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.
Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N
2016-11-01
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
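A much-simplified sketch of the maximum-entropy idea: sample synonymous codons with Boltzmann weights exp(λ·GC(codon)) and sweep λ toward a target GC content. The truncated codon table and the manual sweep are illustrative; NullSeq itself is more careful about constraints and tuning:

```python
# Sample random coding sequences for a fixed protein with a tunable GC bias.
import math
import random

CODONS = {  # amino acid -> synonymous codons (illustrative subset of the code)
    "M": ["ATG"], "K": ["AAA", "AAG"], "F": ["TTT", "TTC"],
    "G": ["GGT", "GGC", "GGA", "GGG"],
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
}

def gc(codon):
    return sum(c in "GC" for c in codon) / 3

def sample_sequence(protein, lam, rng):
    """Pick each codon with probability proportional to exp(lam * GC(codon))."""
    seq = []
    for aa in protein:
        codons = CODONS[aa]
        weights = [math.exp(lam * gc(c)) for c in codons]
        seq.append(rng.choices(codons, weights=weights)[0])
    return "".join(seq)

rng = random.Random(8)
protein = "MKGLFKG" * 30
for lam in (-3.0, 0.0, 3.0):   # tune lam (e.g. by bisection) to hit a target GC
    s = sample_sequence(protein, lam, rng)
    print(f"lambda = {lam:+.1f}: GC content = {sum(c in 'GC' for c in s) / len(s):.3f}")
```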
NASA Technical Reports Server (NTRS)
Messaro, Semma; Harrison, Phillip
2010-01-01
Ares I zonal random vibration environments due to acoustic impingement and combustion processes are developed for liftoff, ascent, and reentry. Random vibration test criteria for Ares I Upper Stage pyrotechnic components are developed by enveloping the applicable zonal environments where each component is located. Random vibration tests will be conducted to assure that these components will survive and function appropriately after exposure to the expected vibration environments. Methodology: Random vibration test criteria were desired for the Ares I Upper Stage pyrotechnic components that would envelope all the applicable environments where each component is located. Applicable Ares I vehicle drawings and design information were assessed to determine the location(s) of each component on the Ares I Upper Stage. Design and test criteria were developed by plotting and enveloping the applicable environments using Microsoft Excel spreadsheet software and documenting them in a report using Microsoft Word. Conclusion: Random vibration liftoff, ascent, and green-run design and test criteria for the Upper Stage pyrotechnic components were developed by using Microsoft Excel to envelope the zonal environments applicable to each component. Results were transferred from Excel into a report using Microsoft Word. After the report is reviewed and edited by my mentor, it will be submitted for publication as an attachment to a memorandum. Pyrotechnic component designers will extract criteria from my report for incorporation into the design and test specifications for components. Eventually the hardware will be tested to the environments I developed to assure that the components will survive and function appropriately after exposure to the expected vibration environments.
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
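A minimal sketch of the first technique under stated assumptions: the correlation function is taken to have the parametric form R(τ) = exp(−a|τ|), the records come from a matching AR(1) process, and the parameter a is updated by a Robbins-Monro step against point estimates from each new record. Gains, record length, and lags are illustrative:

```python
# Recursive estimation of a correlation-function parameter by stochastic
# approximation, matching point estimates to R(tau) = exp(-a * tau).
import numpy as np

rng = np.random.default_rng(9)
a_true, n, taus = 0.5, 2000, np.arange(1, 6)

def record():
    """One record of an AR(1) process whose lag-tau correlation is exp(-a_true*tau)."""
    phi = np.exp(-a_true)
    eps = np.sqrt(1 - phi**2) * rng.standard_normal(n)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

a_hat = 1.5                                   # initial parameter guess
for k in range(1, 201):
    x = record()
    r_hat = np.array([np.mean(x[:-t] * x[t:]) for t in taus])  # point estimates
    resid = np.exp(-a_hat * taus) - r_hat
    grad = np.sum(resid * (-taus) * np.exp(-a_hat * taus))     # d(MSE)/da
    a_hat -= (4.0 / k) * grad                                  # Robbins-Monro step
print("estimated a:", round(a_hat, 3), " true a:", a_true)
```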
Gillam, Ronald B.; Loeb, Diane Frome; Hoffman, LaVae M.; Bohman, Thomas; Champlin, Craig A.; Thibodeau, Linda; Widen, Judith; Brandel, Jayne; Friel-Patti, Sandy
2008-01-01
Purpose A randomized controlled trial (RCT) was conducted to compare the language and auditory processing outcomes of children assigned to Fast ForWord-Language (FFW-L) to the outcomes of children assigned to nonspecific or specific language intervention comparison treatments that did not contain modified speech. Method Two hundred and sixteen children between the ages of 6 and 9 years with language impairments were randomly assigned to one of four arms: Fast ForWord-Language (FFW-L), academic enrichment (AE), computer-assisted language intervention (CALI), or individualized language intervention (ILI) provided by a speech-language pathologist. All children received 1 hour and 40 minutes of treatment, 5 days per week, for 6 weeks. Language and auditory processing measures were administered to the children by blinded examiners before treatment, immediately after treatment, 3 months after treatment, and 6 months after treatment. Results The children in all four arms improved significantly on a global language test and a test of backward masking. Children with poor backward masking scores who were randomized to the FFW-L arm did not present greater improvement on the language measures than children with poor backward masking scores who were randomized to the other three arms. Effect sizes, analyses of standard error of measurement, and normalization percentages supported the clinical significance of the improvements on the CASL. There was a treatment effect for the Blending Words subtest on the Comprehensive Test of Phonological Processing (Wagner, Torgesen, & Rashotte, 1999). Participants in the FFW-L and CALI arms earned higher phonological awareness scores than children in the ILI and AE arms at the six-month follow-up testing. Conclusion Fast ForWord-Language, the language intervention that provided modified speech to address a hypothesized underlying auditory processing deficit, was not more effective at improving general language skills or temporal processing skills than a nonspecific comparison treatment (AE) or specific language intervention comparison treatments (CALI and ILI) that did not contain modified speech stimuli. These findings call into question the temporal processing hypothesis of language impairment and the hypothesized benefits of using acoustically modified speech to improve language skills. The finding that children in the three treatment arms and the active comparison arm made clinically relevant gains on measures of language and temporal auditory processing informs our understanding of the variety of intervention activities that can facilitate development. PMID:18230858
NASA Astrophysics Data System (ADS)
Moyer, Steve; Uhl, Elizabeth R.
2015-05-01
For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying and modeling the human visual discrimination process as it pertains to military imaging systems. In order to develop sensor performance models, human observers are trained to expert levels in the identification of military vehicles. From 1998 until 2006, the experimental stimuli were block randomized, meaning that stimuli with similar difficulty levels (for example, in terms of distance from target, blur, noise, etc.) were presented together in blocks of approximately 24 images, but the order of images within the block was random. Starting in 2006, complete randomization came into vogue, meaning that difficulty could change image to image. It was thought that this would provide a more statistically robust result. In this study we investigated the impact of the two types of randomization on performance in two groups of observers matched for skill. It is hypothesized that Soldiers in the Complete Randomization condition will have to shift their decision criterion more frequently than Soldiers in the Block Randomization condition, and that this shifting will impede performance, so that Soldiers in the Block Randomization condition perform better.
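A minimal sketch of the two stimulus-ordering schemes contrasted in the study; the difficulty scores, block size of 24, and seeding are illustrative assumptions.

```python
import random

def block_randomize(stimuli, key, block_size=24, seed=0):
    """Group stimuli into difficulty blocks; shuffle only within each block."""
    rng = random.Random(seed)
    ordered = sorted(stimuli, key=key)       # similar difficulty ends up adjacent
    blocks = [ordered[i:i + block_size] for i in range(0, len(ordered), block_size)]
    for b in blocks:
        rng.shuffle(b)                       # random order *within* a block
    return [s for b in blocks for s in b]

def complete_randomize(stimuli, seed=0):
    """Shuffle the whole list: difficulty may change from image to image."""
    rng = random.Random(seed)
    out = list(stimuli)
    rng.shuffle(out)
    return out

# hypothetical stimuli: (image id, difficulty score)
stimuli = [(i, round(random.Random(i).random(), 2)) for i in range(96)]
print(block_randomize(stimuli, key=lambda s: s[1])[:5])
print(complete_randomize(stimuli)[:5])
```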
Simulation of Stochastic Processes by Coupled ODE-PDE
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A document discusses, as a new phenomenon, the emergence of randomness in solutions of coupled, fully deterministic ODE-PDE (ordinary differential equations-partial differential equations) systems due to failure of the Lipschitz condition. It is possible to exploit the special properties of ordinary differential equations (represented by an arbitrarily chosen dynamical system) coupled with the corresponding Liouville equations (used to describe the evolution of initial uncertainties in terms of a joint probability distribution) in order to simulate stochastic processes with prescribed probability distributions. The important advantage of the proposed approach is that the simulation does not require a random-number generator.
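The mechanism invoked here, randomness born of Lipschitz-condition failure, can be illustrated with a toy ODE (not the paper's ODE-Liouville construction): dx/dt = sign(x)|x|^(1/3) is continuous but not Lipschitz at x = 0, so the solution through x(0) = 0 is not unique, and vanishingly small uncertainties in the initial state decide which branch a deterministic solver follows.

```python
import numpy as np

def euler(x0, f, dt=1e-3, steps=5000):
    """Plain forward-Euler integration of dx/dt = f(x)."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

# f is continuous but not Lipschitz at x = 0: both x(t) = 0 and
# x(t) = (2t/3)^(3/2) solve the initial value problem from x(0) = 0
f = lambda x: np.sign(x) * abs(x) ** (1.0 / 3.0)

rng = np.random.default_rng(1)
# perturbations of order 1e-15 are amplified to order-one outcomes:
# a fully deterministic equation with random-looking output
finals = [euler(rng.normal(scale=1e-15), f) for _ in range(5)]
print(finals)
```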
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
Fractional Diffusion Processes: Probability Distributions and Continuous Time Random Walk
NASA Astrophysics Data System (ADS)
Gorenflo, R.; Mainardi, F.
A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space or/and time) and related random walk models. By the space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0,2] and skewness θ (|θ| ≤ min{α, 2−α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0,1]. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process. We view it as a generalized diffusion process that we call fractional diffusion process, and present an integral representation of the fundamental solution. A more general approach to anomalous diffusion is however known to be provided by the master equation for a continuous time random walk (CTRW). We show how this equation reduces to our fractional diffusion equation by a properly scaled passage to the limit of compressed waiting times and jump widths. Finally, we describe a method of simulation and display (via graphics) results of a few numerical case studies.
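A minimal sketch of the CTRW just described, with Pareto (power-law) waiting times and symmetric α-stable jumps drawn by the Chambers-Mallows-Stuck method; the parameter values are illustrative, and the properly rescaled limit of such paths is the fractional diffusion discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def stable_jump(alpha, size):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def ctrw(alpha=1.5, beta=0.8, n_steps=10_000):
    """One CTRW path: Pareto waiting times (tail index beta) + stable jumps."""
    waits = rng.uniform(size=n_steps) ** (-1 / beta)   # P(T > t) ~ t^(-beta)
    jumps = stable_jump(alpha, n_steps)
    return np.cumsum(waits), np.cumsum(jumps)

t, x = ctrw()
print(t[-1], x[-1])
```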
Flaugnacco, Elena; Lopez, Luisa; Terribili, Chiara; Montico, Marcella; Zoia, Stefania; Schön, Daniele
2015-01-01
There is some evidence for a role of music training in boosting phonological awareness, word segmentation, working memory, as well as reading abilities in children with typical development. Poor performance in tasks requiring temporal processing, rhythm perception and sensorimotor synchronization seems to be a crucial factor underlying dyslexia in children. Interestingly, children with dyslexia show deficits in temporal processing, both in language and in music. Within this framework, we test the hypothesis that music training, by improving temporal processing and rhythm abilities, improves phonological awareness and reading skills in children with dyslexia. The study is a prospective, multicenter, open randomized controlled trial, consisting of test, rehabilitation and re-test (ID NCT02316873). After rehabilitation, the music group (N = 24) performed better than the control group (N = 22) in tasks assessing rhythmic abilities, phonological awareness and reading skills. This is the first randomized control trial testing the effect of music training in enhancing phonological and reading abilities in children with dyslexia. The findings show that music training can modify reading and phonological abilities even when these skills are severely impaired. Through the enhancement of temporal processing and rhythmic skills, music might become an important tool in both remediation and early intervention programs. Trial Registration ClinicalTrials.gov NCT02316873 PMID:26407242
Laser positioning of four-quadrant detector based on pseudo-random sequence
NASA Astrophysics Data System (ADS)
Tang, Yanqin; Cao, Ercong; Hu, Xiaobo; Gu, Guohua; Qian, Weixian
2016-10-01
Laser positioning based on a four-quadrant detector is widely studied and applied. Its principle is to capture the projection of the laser spot on the photosensitive surface of the detector and, from the detector's output signals, compute the coordinates of the spot on that surface; from these, the position of the laser spot in space relative to the detector system, and hence the spatial position of the target object, is calculated. Because a pseudo-random sequence has correlation properties similar to those of white noise, interference and noise encountered during measurement have little effect on the correlation peak, and the wide availability of FPGA technology makes such processing practical. To improve the anti-jamming capability of a guided missile during tracking, the laser pulse period is pseudo-randomly encoded at emission, varying within the range of 40 ms to 65 ms, so that a jammer cannot identify the true laser pulses. Because the receiver knows how to decode the pseudo-random code, it can recover the pulse period after receiving two consecutive laser pulses. In the FPGA hardware implementation, the receiver opens a range gate around each predicted pulse arrival time to extract the position information carried by the true signal. Since the first two consecutive pulses received may themselves be disturbed, after receiving the first laser pulse the receiver accepts all laser pulses within the following 40 ms to 65 ms window in order to acquire the corresponding pseudo-random code.
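A toy sketch of the pulse-interval scheme described above: transmitter and receiver share a key that reproduces the pseudo-random periods in [40 ms, 65 ms], so the receiver can predict each arrival and gate out jamming. The key, gate width, and function names are illustrative assumptions; this is not the authors' FPGA implementation.

```python
import random

def pulse_periods(key, n, lo=0.040, hi=0.065):
    """Pseudo-random pulse periods in [40 ms, 65 ms], reproducible from a shared key."""
    rng = random.Random(key)
    return [rng.uniform(lo, hi) for _ in range(n)]

key = 0xBEEF                      # shared secret between transmitter and receiver
tx = pulse_periods(key, 100)

# receiver: after locking onto the sequence, predict each arrival and open a
# narrow gate around it; detector events outside the gate are treated as jamming
gate = 0.001                      # 1 ms gate half-width (illustrative)
t = 0.0
for period in tx:
    t += period
    lo_edge, hi_edge = t - gate, t + gate
    # accept only detector events with timestamps in [lo_edge, hi_edge]
```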
An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling
NASA Astrophysics Data System (ADS)
Qiu, X. N.; Lau, H. Y. K.
The problem of job shop scheduling in a dynamic environment, where random perturbations exist in the system, is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP), in which unexpected events occur randomly. This algorithm is designed based on the dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate this algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively, as it is capable of triggering the rescheduling process optimally with much less run time needed to decide the rescheduling action. As such, the proposed algorithm is able to minimize the number of reschedulings under the defined objective and to keep the scheduling process stable and efficient.
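For orientation, the sketch below shows the signal-fusion core of the deterministic DCA as published by Greensmith and Aickelin, on which the authors' extension builds; the migration threshold and the toy danger/safe signal stream are illustrative assumptions, and the paper's improvements (all signal types, output magnitudes) are not reproduced here.

```python
def ddca(signals, migration_threshold=10.0):
    """Deterministic DCA core: fuse (danger, safe) signal pairs into a binary
    context; an anomalous context could trigger rescheduling."""
    csm = 0.0   # cumulative signal magnitude
    k = 0.0     # context value: positive means anomalous
    decisions = []
    for danger, safe in signals:
        csm += danger + safe
        k += danger - 2.0 * safe
        if csm >= migration_threshold:        # cell "migrates": output context
            decisions.append("anomalous" if k > 0 else "normal")
            csm, k = 0.0, 0.0
    return decisions

# e.g. machine-breakdown events raise the danger signal
stream = [(0.5, 3.0)] * 5 + [(4.0, 0.5)] * 5
print(ddca(stream))    # ['normal', 'normal', 'anomalous']
```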
Afkhamizadeh, Mozhgan; Aboutorabi, Robab; Ravari, Hassan; Fathi Najafi, Mohsen; Ataei Azimi, Sajad; Javadian Langaroodi, Adineh; Yaghoubi, Mohammad Ali; Sahebkar, Amirhossein
2017-08-22
In this randomized controlled trial, diabetic patients with foot ulcers (Wagner grades 1 and 2) were randomly assigned to conventional therapies for diabetic foot ulcer plus topical propolis ointment (5%; twice daily) or conventional therapies alone. The process of ulcer healing was observed during 4 weeks and compared between the two groups regarding the size, erythema, exudates, white blood cell (WBC) count and erythrocyte sedimentation rate (ESR). The process of ulcer size reduction during the four-week period of study was significantly different between the groups. However, this difference was not significant between the third and fourth weeks. There was no significant difference between two groups regarding erythema and exudate reduction as well as WBC count and ESR. Administration of topical propolis ointment in addition to the conventional treatments of diabetic foot ulcer could reduce the size of ulcers with Wagner grades 1 and 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick
We analyze component evolution in general random intersection graphs (RIGs) and give conditions on existence and uniqueness of the giant component. Our techniques generalize the existing methods for analysis of component evolution in RIGs. That is, we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdos-Renyi graphs. The main challenge comes from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step during the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we perceive RIGs as an important random structure which has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.
Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces
NASA Astrophysics Data System (ADS)
Vacaru, S. I.
2012-03-01
We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculus are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of the relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form and study their main geometric and stochastic properties. Finally, the conditions when non-random classical gravitational processes transform into stochastic ones and inversely are analyzed.
NASA Astrophysics Data System (ADS)
Bouleau, Nicolas; Chorro, Christophe
2017-08-01
In this paper we consider some elementary and fair zero-sum games of chance in order to study the impact of random effects on the wealth distribution of N interacting players. Even if an exhaustive analytical study of such games between many players may be tricky, numerical experiments highlight interesting asymptotic properties. In particular, we emphasize that randomness plays a key role in concentrating wealth in the extreme, in the hands of a single player. From a mathematical perspective, we adopt diffusion limits for small and high-frequency transactions, tools otherwise used extensively in population genetics. Finally, the impact of small tax rates on the preceding dynamics is discussed for several regulation mechanisms. We show that taxation of income is not sufficient to overcome this extreme concentration process, in contrast to the uniform taxation of capital, which stabilizes the economy and prevents agents from being ruined.
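A minimal sketch of one such fair zero-sum game: every bet is even-odds, yet wealth drifts toward concentration. The player count, stakes, and horizon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def fair_game(n_players=50, wealth0=100.0, rounds=200_000):
    """Repeated fair zero-sum bets: each round two random players stake 1 unit
    and a fair coin decides who takes it (a player must be solvent to stake)."""
    w = np.full(n_players, wealth0)
    for _ in range(rounds):
        i, j = rng.choice(n_players, size=2, replace=False)
        if w[i] >= 1 and w[j] >= 1:
            if rng.random() < 0.5:
                w[i] += 1; w[j] -= 1
            else:
                w[i] -= 1; w[j] += 1
    return w

w = fair_game()
# despite every bet being fair, wealth spreads out; longer horizons drive the
# extreme single-player concentration discussed in the text
print(f"max share of total wealth: {w.max() / w.sum():.2f}")
```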
Sweeney, Dean; Quinlan, Leo R; OLaighin, Gearoid
2015-08-01
The use of neuromuscular electrical stimulation (NMES) has evolved over the last five decades. Technological advancements have transformed these once complex systems into user-friendly devices with enhanced control functions, leading to new applications of NMES being investigated. The use of randomized controlled trial (RCT) methodology in evaluating the effectiveness of new and existing applications of NMES is a demanding process, adding time and cost to translation into clinical practice. Poor quality trials may result in poor evidence of NMES effectiveness. In this paper some of the key challenges encountered in NMES clinical trials are identified, with the aim of proposing a solution to address these challenges through the adoption of smartphone technology. The design and evaluation of a smartphone application that provides automatic blinded randomization control and facilitates the wireless temporal control of a portable Bluetooth-enabled NMES device is presented.
A randomized trial comparing concise and standard consent forms in the START trial
Touloumi, Giota; Walker, A. Sarah; Smolskis, Mary; Sharma, Shweta; Babiker, Abdel G.; Pantazis, Nikos; Tavel, Jorge; Florence, Eric; Sanchez, Adriana; Hudson, Fleur; Papadopoulos, Antonios; Emanuel, Ezekiel; Clewett, Megan; Munroe, David; Denning, Eileen
2017-01-01
Background Improving the effectiveness and efficiency of research informed consent is a high priority. Some express concern about longer, more complex, written consent forms creating barriers to participant understanding. A recent meta-analysis concluded that randomized comparisons were needed. Methods We conducted a cluster-randomized non-inferiority comparison of a standard versus concise consent form within a multinational trial studying the timing of starting antiretroviral therapy in HIV+ adults (START). Interested sites were randomized to standard or concise consent forms for all individuals signing START consent. Participants completed a survey measuring comprehension of study information and satisfaction with the consent process. Site personnel reported usual site consent practices. The primary outcome was comprehension of the purpose of randomization (pre-specified 7.5% non-inferiority margin). Results 77 sites (2429 participants) were randomly allocated to use standard consent and 77 sites (2000 participants) concise consent, for an evaluable cohort of 4229. Site and participant characteristics were similar for the two groups. The concise consent was non-inferior to the standard consent on comprehension of randomization (80.2% versus 82%, site adjusted difference: 0.75% (95% CI -3.8%, +5.2%)); and the two groups did not differ significantly on total comprehension score, satisfaction, or voluntariness (p>0.1). Certain independent factors, such as education, influenced comprehension and satisfaction but not differences between consent groups. Conclusions An easier to read, more concise consent form neither hindered nor improved comprehension of study information nor satisfaction with the consent process among a large number of participants. This supports continued efforts to make consent forms more efficient. Trial registration Informed consent substudy was registered as part of START study in clinicaltrials.gov #NCT00867048, and EudraCT # 2008-006439-12 PMID:28445471
NASA Astrophysics Data System (ADS)
Zaburdaev, V.; Denisov, S.; Klafter, J.
2015-04-01
Random walk is a fundamental concept with applications ranging from quantum physics to econometrics. Remarkably, one specific model of random walks appears to be ubiquitous across many fields as a tool to analyze transport phenomena in which the dispersal process is faster than dictated by Brownian diffusion. The Lévy-walk model combines two key features, the ability to generate anomalously fast diffusion and a finite velocity of a random walker. Recent results in optics, Hamiltonian chaos, cold atom dynamics, biophysics, and behavioral science demonstrate that this particular type of random walk provides significant insight into complex transport phenomena. This review gives a self-consistent introduction to Lévy walks, surveys their existing applications, including latest advances, and outlines further perspectives.
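A minimal sketch of a one-dimensional Lévy walk with the two key features named above, a finite walker speed and power-law flight durations; the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def levy_walk(gamma=1.5, v=1.0, t_max=1e4):
    """1D Lévy walk: constant speed v, flight durations with a power-law tail
    P(T > t) ~ t^(-gamma), and a random direction for each flight."""
    t, x = 0.0, 0.0
    ts, xs = [0.0], [0.0]
    while t < t_max:
        dur = rng.uniform() ** (-1 / gamma)      # Pareto flight time, T >= 1
        step = v * dur * rng.choice([-1.0, 1.0])
        t += dur
        x += step                                # finite velocity: |x| <= v * t
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

# for 1 < gamma < 2 the mean-squared displacement grows superdiffusively,
# <x^2(t)> ~ t^(3 - gamma), i.e. faster than the Brownian t^1
ts, xs = levy_walk()
print(len(ts), xs[-1])
```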
Quantum random bit generation using energy fluctuations in stimulated Raman scattering.
Bustard, Philip J; England, Duncan G; Nunn, Josh; Moffatt, Doug; Spanner, Michael; Lausten, Rune; Sussman, Benjamin J
2013-12-02
Random number sequences are a critical resource in modern information processing systems, with applications in cryptography, numerical simulation, and data sampling. We introduce a quantum random number generator based on the measurement of pulse energy quantum fluctuations in Stokes light generated by spontaneously-initiated stimulated Raman scattering. Bright Stokes pulse energy fluctuations up to five times the mean energy are measured with fast photodiodes and converted to unbiased random binary strings. Since the pulse energy is a continuous variable, multiple bits can be extracted from a single measurement. Our approach can be generalized to a wide range of Raman active materials; here we demonstrate a prototype using the optical phonon line in bulk diamond.
A Randomized Controlled Trial of an Electronic Informed Consent Process
Rothwell, Erin; Wong, Bob; Rose, Nancy C.; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A.; Botkin, Jeffrey R.
2018-01-01
A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Results from the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and who to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding for some aspects of a research study. PMID:25747685
Online Process Scaffolding and Students' Self-Regulated Learning with Hypermedia.
ERIC Educational Resources Information Center
Azevedo, Roger; Cromley, Jennifer G.; Thomas, Leslie; Seibert, Diane; Tron, Myriam
This study examined the role of different scaffolding instructional interventions in facilitating students' shift to more sophisticated mental models as indicated by both performance and process data. Undergraduate students (n=53) were randomly assigned to 1 of 3 scaffolding conditions (adaptive content and process scaffolding (ACPS), adaptive…
The Effects of Levels of Elaboration on Learners' Strategic Processing of Text
ERIC Educational Resources Information Center
Dornisch, Michele; Sperling, Rayne A.; Zeruth, Jill A.
2011-01-01
In the current work, we examined learners' comprehension when engaged with elaborative processing strategies. In Experiment 1, we randomly assigned students to one of five elaborative processing conditions and addressed differences in learners' lower- and higher-order learning outcomes and ability to employ elaborative strategies. Findings…
Perceptions of Randomness: Why Three Heads Are Better than Four
ERIC Educational Resources Information Center
Hahn, Ulrike; Warren, Paul A.
2009-01-01
A long tradition of psychological research has lamented the systematic errors and biases in people's perception of the characteristics of sequences generated by a random mechanism such as a coin toss. It is proposed that once the likely nature of people's actual experience of such processes is taken into account, these "errors" and "biases"…
Predicting bending stiffness of randomly oriented hybrid panels
Laura Moya; William T.Y. Tze; Jerrold E. Winandy
2010-01-01
This study was conducted to develop a simple model to predict the bending modulus of elasticity (MOE) of randomly oriented hybrid panels. The modeling process involved three modules: the behavior of a single layer was computed by applying micromechanics equations, layer properties were adjusted for densification effects, and the entire panel was modeled as a three-...
ERIC Educational Resources Information Center
Fraser, Mark W.; Day, Steven H.; Galinsky, Maeda J.; Hodges, Vanessa G.; Smokowski, Paul R.
2004-01-01
This article discusses the effectiveness of a multicomponent intervention designed to disrupt developmental processes associated with conduct problems and peer rejection in childhood. Compared with 41 children randomized to a wait list control condition, 45 children in an intervention condition received a social skills training program. At the…
A stochastic maximum principle for backward control systems with random default time
NASA Astrophysics Data System (ADS)
Shen, Yang; Kuen Siu, Tak
2013-05-01
This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.
ERIC Educational Resources Information Center
Grommon, Eric; Davidson, William S., II; Bynum, Timothy S.
2013-01-01
Prisoner reentry programs continue to be developed and implemented to ease the process of transition into the community and to curtail fiscal pressures. This study describes and provides relapse and recidivism outcome findings related to a randomized trial evaluating a multimodal, community-based reentry program that prioritized substance abuse…
Generating Random Numbers by Means of Nonlinear Dynamic Systems
ERIC Educational Resources Information Center
Zang, Jiaqi; Hu, Haojie; Zhong, Juhua; Luo, Duanbin; Fang, Yi
2018-01-01
To introduce the randomness of a physical process to students, a chaotic pendulum experiment was opened in East China University of Science and Technology (ECUST) on the undergraduate level in the physics department. It was shown chaotic motion could be initiated through adjusting the operation of a chaotic pendulum. By using the data of the…
ERIC Educational Resources Information Center
Huang, Hung-Yu; Wang, Wen-Chung
2014-01-01
The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
ERIC Educational Resources Information Center
Kamienkowski, Juan E.; Pashler, Harold; Dehaene, Stanislas; Sigman, Mariano
2011-01-01
Does extensive practice reduce or eliminate central interference in dual-task processing? We explored the reorganization of task architecture with practice by combining interference analysis (delays in dual-task experiment) and random-walk models of decision making (measuring the decision and non-decision contributions to RT). The main delay…
Improving the Management Style of School Principals: Results from a Randomized Trial
ERIC Educational Resources Information Center
Lassibille, Gérard
2016-01-01
Using information from a randomized experiment carried out over the course of two school years in Madagascar, this paper evaluates the impact of specific actions designed to streamline and tighten the work processes of public primary school directors. The results show that interventions at the school level, reinforced by interventions at the…
Random Qualitative Validation: A Mixed-Methods Approach to Survey Validation
ERIC Educational Resources Information Center
Van Duzer, Eric
2012-01-01
The purpose of this paper is to introduce the process and value of Random Qualitative Validation (RQV) in the development and interpretation of survey data. RQV is a method of gathering clarifying qualitative data that improves the validity of the quantitative analysis. This paper is concerned with validity in relation to the participants'…
NASA Astrophysics Data System (ADS)
Alifu, Xiafukaiti; Ziqi, Peng; Shiina, Tatsuo
2018-04-01
Non-diffracting beams (NDBs) are useful in lidar transmitters because of their high propagation efficiency and high resolution. We aimed to generate an NDB in random media such as haze and cloud. The laboratory experiment was conducted with diluted processed milk (fat: 1.8%, 1.1 μm φ). A narrow-view-angle detector of 5.5 mrad was used to detect the forward scattering waveform. We obtained the central peak of the NDB at propagation distances of 5 cm to 30 cm in the random media by adjusting the concentration (<10%).
NASA Technical Reports Server (NTRS)
Kaljurand, M.; Valentin, J. R.; Shao, M.
1996-01-01
Two alternative input sequences are commonly employed in correlation chromatography (CC): sequences derived according to the feedback shift register algorithm (i.e., pseudo-random binary sequences, PRBS) and uniform random binary sequences (URBS). These two sequences are compared. By applying the "cleaning" data processing technique to the correlograms that result from these sequences, we show that when the PRBS is used the S/N of the correlogram is much higher than the one resulting from using URBS.
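A minimal sketch contrasting the two input sequences: an m-sequence PRBS from a 7-stage feedback shift register (using the primitive polynomial x^7 + x^6 + 1) versus a uniform random binary sequence, compared through their circular autocorrelations; the lengths and seeds are illustrative.

```python
import numpy as np

def prbs(taps=(7, 6), nbits=7):
    """Maximal-length PRBS from a Fibonacci LFSR; taps (7, 6) give a 127-bit
    m-sequence (feedback polynomial x^7 + x^6 + 1, which is primitive)."""
    state = (1 << nbits) - 1
    out = []
    for _ in range(2 ** nbits - 1):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        out.append(state & 1)
        state = (state >> 1) | (fb << (nbits - 1))
    return np.array(out)

def circular_acf(b):
    s = 2.0 * b - 1.0                    # map {0, 1} -> {-1, +1}
    return np.array([np.dot(s, np.roll(s, k)) / len(s) for k in range(len(s))])

p = prbs()
u = np.random.default_rng(0).integers(0, 2, size=len(p))   # URBS counterpart
# PRBS: delta-like ACF (1 at lag 0, -1/N elsewhere); URBS: noisy off-peak lags
print(circular_acf(p)[:4])
print(circular_acf(u)[:4])
```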
Newton, J Stephen; Horner, Robert H; Algozzine, Bob; Todd, Anne W; Algozzine, Kate
2012-08-01
Members of Positive Behavior Interventions and Supports (PBIS) teams from 34 elementary schools participated in a Team-Initiated Problem Solving (TIPS) Workshop and follow-up technical assistance. Within the context of a randomized wait-list controlled trial, team members who were the first recipients of the TIPS intervention demonstrated greater implementation integrity in using the problem-solving processes during their team meetings than did members of PBIS Teams in the Wait-List Control group. The success of TIPS at improving implementation integrity of the problem-solving processes is encouraging and suggests the value of conducting additional research focused on determining whether there is a functional relation between use of these problem-solving processes and actual resolution of targeted student academic and social problems. Copyright © 2012 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Li, Jie; Liang, Xinhua; Liou, Frank; Park, Jonghyun
2018-01-30
This paper presents a new concept for making battery electrodes that can simultaneously control macro-/micro-structures and help address current energy storage technology gaps and future energy storage requirements. Modern batteries are fabricated in the form of laminated structures that are composed of randomly mixed constituent materials. The randomness inherent in conventional methods opens the possibility of developing new breakthrough processing techniques that build well-organized structures to improve battery performance. In the proposed processing, an electric field (EF) controls the microstructures of manganese-based electrodes, while additive manufacturing controls macro-3D structures and the integration of both scales. The synergistic control of micro-/macro-structures is a novel concept in energy material processing that has considerable potential for providing unprecedented control of electrode structures, thereby enhancing performance. Electrochemical tests have shown that these new electrodes exhibit superior performance in their specific capacity, areal capacity, and life cycle.
Producing a functional eukaryotic messenger RNA (mRNA) requires the coordinated activity of several large protein complexes to initiate transcription, elongate nascent transcripts, splice together exons, and cleave and polyadenylate the 3’ end. Kinetic competition between these various processes has been proposed to regulate mRNA maturation, but this model could lead to multiple, randomly determined, or stochastic, pathways or outcomes. Regulatory checkpoints have been suggested as a means of ensuring quality control. However, current methods have been unable to tease apart the contributions of these processes at a single gene or on a time scale that could provide mechanistic insight. To begin to investigate the kinetic relationship between transcription and splicing, Daniel Larson, Ph.D., of CCR’s Laboratory of Receptor Biology and Gene Expression, and his colleagues employed a single-molecule RNA imaging approach to monitor production and processing of a human β-globin reporter gene in living cells.
The effect of the neural activity on topological properties of growing neural networks.
Gafarov, F M; Gafarova, V R
2016-09-01
The connectivity structure in cortical networks defines how information is transmitted and processed; it underlies the complex spatiotemporal patterns of network development, and the creation and deletion of connections continues throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the neural network growth process from disconnected neurons to fully connected networks. To quantify the influence of the network's activity on its topological properties, we compared the model with a random growth network that does not depend on network activity. Using methods from random graph theory to analyze the network's connection structure, it is shown that growth in neural networks results in the formation of a well-known "small-world" network.
Random walks on activity-driven networks with attractiveness
NASA Astrophysics Data System (ADS)
Alessandretti, Laura; Sun, Kaiyuan; Baronchelli, Andrea; Perra, Nicola
2017-05-01
Virtually all real-world networks are dynamical entities. In social networks, the propensity of nodes to engage in social interactions (activity) and their chances to be selected by active nodes (attractiveness) are heterogeneously distributed. Here, we present a time-varying network model where each node and the dynamical formation of ties are characterized by these two features. We study how these properties affect random-walk processes unfolding on the network when the time scales describing the process and the network evolution are comparable. We derive analytical solutions for the stationary state and the mean first-passage time of the process, and we study cases informed by empirical observations of social networks. Our work shows that previously disregarded properties of real social systems, such as heterogeneous distributions of activity and attractiveness as well as the correlations between them, substantially affect the dynamical process unfolding on the network.
Long-term persistence of solar activity
NASA Technical Reports Server (NTRS)
Ruzmaikin, Alexander; Feynman, Joan; Robinson, Paul
1994-01-01
We examine the question of whether or not the non-periodic variations in solar activity are caused by a white-noise, random process. The Hurst exponent, which characterizes the persistence of a time series, is evaluated for the series of C-14 data for the time interval from about 6000 BC to 1950 AD. We find a constant Hurst exponent, suggesting that solar activity in the frequency range from 100 to 3000 years includes an important continuum component in addition to the well-known periodic variations. The value we calculate, H approximately 0.8, is significantly larger than the value of 0.5 that would correspond to variations produced by a white-noise process. This value is in good agreement with the results for the monthly sunspot data reported elsewhere, indicating that the physics that produces the continuum is a correlated random process and that it is the same type of process over a wide range of time interval lengths.
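For orientation, here is a minimal sketch of the classical rescaled-range (R/S) estimator of the Hurst exponent, applied to synthetic white noise, for which H should come out near 0.5 (a persistent series such as the C-14 record would give H nearer the paper's 0.8). The window sizes are illustrative, and small-sample bias pushes the estimate slightly high.

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            y = np.cumsum(w - w.mean())          # cumulative deviation from mean
            r = y.max() - y.min()                # range of the cumulative series
            s = w.std()
            if s > 0:
                rs.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_n, log_rs, 1)[0]       # slope of log(R/S) vs log(n)

rng = np.random.default_rng(0)
white = rng.normal(size=4096)
print(f"white noise: H ~ {hurst_rs(white):.2f}")   # expect about 0.5
```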
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
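A minimal sketch of the model as described: the walk begins with N fixed steps, and each later step recalls a uniformly random past step, repeating it with probability p or reversing it otherwise; the parameter values are illustrative (p > 3/4 is the anomalous, superdiffusive regime of the standard elephant walk).

```python
import numpy as np

rng = np.random.default_rng(5)

def erw(p=0.8, n_fixed=10, t_max=2000):
    """Elephant random walk whose initial state is n_fixed given (+1) steps;
    afterwards each step recalls a uniformly random past step and repeats it
    with probability p (reverses it with probability 1 - p)."""
    steps = [1] * n_fixed                        # the fixed initial history
    for _ in range(t_max):
        recalled = steps[rng.integers(len(steps))]
        steps.append(recalled if rng.random() < p else -recalled)
    return np.cumsum(steps)

# with p = 0.8 > 3/4 the walk sits in the anomalous diffusion regime
finals = np.array([erw()[-1] for _ in range(200)])
print(finals.mean(), finals.std())
```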
Shaping the spectrum of random-phase radar waveforms
Doerry, Armin W.; Marquette, Brandeis
2017-05-09
The various technologies presented herein relate to generation of a desired waveform profile in the form of a spectrum of apparently random noise (e.g., white noise or colored noise), but with precise spectral characteristics. Hence, a waveform profile that could be readily determined (e.g., by a spoofing system) is effectively obscured. Obscuration is achieved by dividing the waveform into a series of chips, each with an assigned frequency, wherein the sequence of chips are subsequently randomized. Randomization can be a function of the application of a key to the chip sequence. During processing of the echo pulse, a copy of the randomized transmitted pulse is recovered or regenerated against which the received echo is correlated. Hence, with the echo energy range-compressed in this manner, it is possible to generate a radar image with precise impulse response.
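A toy sketch of the chip-randomization idea: chip frequencies are shuffled by a keyed generator, and the receiver regenerates the same sequence to range-compress the echo by correlation. The frequencies, chip length, and key are illustrative assumptions, and no attempt is made to mirror the patented waveform details.

```python
import numpy as np

def random_phase_waveform(freqs_hz, key, chip_len=64, fs=1e6):
    """Build a pulse from fixed-frequency chips whose order is randomized by a
    key; the same key lets the receiver regenerate the correlation reference."""
    rng = np.random.default_rng(key)
    order = rng.permutation(len(freqs_hz))       # keyed chip shuffle
    t = np.arange(chip_len) / fs
    chips = [np.exp(2j * np.pi * freqs_hz[i] * t) for i in order]
    return np.concatenate(chips)

freqs = np.linspace(-100e3, 100e3, 16)           # illustrative chip frequencies
tx = random_phase_waveform(freqs, key=1234)
ref = random_phase_waveform(freqs, key=1234)     # receiver's regenerated copy
echo = np.roll(tx, 40)                           # toy delayed echo
compressed = np.abs(np.correlate(echo, ref, mode="full"))
print(compressed.argmax() - (len(tx) - 1))       # recovered delay: 40 samples
```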
Is the Non-Dipole Magnetic Field Random?
NASA Technical Reports Server (NTRS)
Walker, Andrew D.; Backus, George E.
1996-01-01
Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.
640-Gbit/s fast physical random number generation using a broadband chaotic semiconductor laser
NASA Astrophysics Data System (ADS)
Zhang, Limeng; Pan, Biwei; Chen, Guangcan; Guo, Lu; Lu, Dan; Zhao, Lingjuan; Wang, Wei
2017-04-01
An ultra-fast physical random number generator is demonstrated utilizing a photonic integrated device based broadband chaotic source with a simple post data processing method. The compact chaotic source is implemented by using a monolithic integrated dual-mode amplified feedback laser (AFL) with self-injection, where a robust chaotic signal with RF frequency coverage above 50 GHz and flatness of ±3.6 dB is generated. By retaining the 4 least significant bits (LSBs) from the 8-bit digitization of the chaotic waveform, random sequences with a bit-rate up to 640 Gbit/s (160 GS/s × 4 bits) are realized. The generated random bits have passed each of the fifteen NIST statistical tests (NIST SP800-22), indicating their randomness for practical applications.
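A minimal sketch of the post-processing step described, retaining the 4 least significant bits of each 8-bit sample; the Gaussian stand-in for the digitized chaotic waveform is an illustrative assumption.

```python
import numpy as np

def lsb_extract(samples_8bit, n_lsb=4):
    """Keep only the n_lsb least significant bits of each 8-bit sample and
    unpack them into a bit stream; discarding the top bits removes the bias
    and correlations carried by the waveform's coarse structure."""
    samples = np.asarray(samples_8bit, dtype=np.uint8)
    kept = (samples & ((1 << n_lsb) - 1)).astype(int)
    bits = ((kept[:, None] >> np.arange(n_lsb - 1, -1, -1)) & 1).ravel()
    return bits

# stand-in for digitized chaotic-laser samples (illustrative only)
rng = np.random.default_rng(9)
analog = rng.normal(128, 40, size=10_000).clip(0, 255).astype(np.uint8)
bits = lsb_extract(analog)
print(bits.mean())    # close to 0.5 for unbiased output
```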
The coalescent process in models with selection and recombination.
Hudson, R R; Kaplan, N L
1988-11-01
The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh do not match the predictions of this simple model very well.
Nouchi, Rui; Saito, Toshiki; Nouchi, Haruka; Kawashima, Ryuta
2016-01-01
Background: Processing speed training using a 1-year intervention period improves cognitive functions and emotional states of elderly people. Nevertheless, it remains unclear whether short-term processing speed training such as 4 weeks can benefit elderly people. This study was designed to investigate effects of 4 weeks of processing speed training on cognitive functions and emotional states of elderly people. Methods: We used a single-blinded randomized control trial (RCT). Seventy-two older adults were assigned randomly to two groups: a processing speed training game (PSTG) group and knowledge quiz training game (KQTG) group, an active control group. In PSTG, participants were asked to play PSTG (12 processing speed games) for 15 min, during five sessions per week, for 4 weeks. In the KQTG group, participants were asked to play KQTG (four knowledge quizzes) for 15 min, during five sessions per week, for 4 weeks. We measured several cognitive functions and emotional states before and after the 4 week intervention period. Results: Our results revealed that PSTG improved performances in processing speed and inhibition compared to KQTG, but did not improve performance in reasoning, shifting, short term/working memory, and episodic memory. Moreover, PSTG reduced the depressive mood score as measured by the Profile of Mood State compared to KQTG during the 4 week intervention period, but did not change other emotional measures. Discussion: This RCT first provided scientific evidence related to small acute benefits of 4 week PSTG on processing speed, inhibition, and depressive mood in healthy elderly people. We discuss possible mechanisms for improvements in processing speed and inhibition and reduction of the depressive mood. Trial registration: This trial was registered in The University Hospital Medical Information Network Clinical Trials Registry (UMIN000022250). PMID:28066229
Information-based models for finance and insurance
NASA Astrophysics Data System (ADS)
Hoyle, Edward
2010-10-01
In financial markets, the information that traders have about an asset is reflected in its price. The arrival of new information then leads to price changes. The 'information-based framework' of Brody, Hughston and Macrina (BHM) isolates the emergence of information, and examines its role as a driver of price dynamics. This approach has led to the development of new models that capture a broad range of price behaviour. This thesis extends the work of BHM by introducing a wider class of processes for the generation of the market filtration. In the BHM framework, each asset is associated with a collection of random cash flows. The asset price is the sum of the discounted expectations of the cash flows. Expectations are taken with respect to (i) an appropriate measure, and (ii) the filtration generated by a set of so-called information processes that carry noisy or imperfect market information about the cash flows. To model the flow of information, we introduce a class of processes termed Lévy random bridges (LRBs), generalising the Brownian and gamma information processes of BHM. Conditioned on its terminal value, an LRB is identical in law to a Lévy bridge. We consider in detail the case where the asset generates a single cash flow X_T at a fixed date T. The flow of information about X_T is modelled by an LRB with random terminal value X_T. An explicit expression for the price process is found by working out the discounted conditional expectation of X_T with respect to the natural filtration of the LRB. New models are constructed using information processes related to the Poisson process, the Cauchy process, the stable-1/2 subordinator, the variance-gamma process, and the normal inverse-Gaussian process. These are applied to the valuation of credit-risky bonds, vanilla and exotic options, and non-life insurance liabilities.
Determining the Number of Clusters in a Data Set Without Graphical Interpretation
NASA Technical Reports Server (NTRS)
Aguirre, Nathan S.; Davies, Misty D.
2011-01-01
Cluster analysis is a data mining technique that is meant to simplify the process of classifying data points. The basic clustering process requires an input of data points and the number of clusters wanted. The clustering algorithm then picks C starting points for the clusters, which can be either random spatial points or random data points. It then assigns each data point to the nearest C point, where "nearest" usually means Euclidean distance, but some algorithms use another criterion. The next step is determining whether the clustering arrangement thus found is within a certain tolerance. If it falls within this tolerance, the process ends. Otherwise the C points are adjusted based on how many data points are in each cluster, and the steps repeat until the algorithm converges.
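The procedure described is essentially Lloyd's k-means algorithm; here is a minimal sketch, with random data points as the starting C points and Euclidean distance as "nearest". The test data and tolerance are illustrative assumptions.

```python
import numpy as np

def kmeans(points, c, tol=1e-6, seed=0, max_iter=100):
    """Basic clustering loop as described: pick C starting points, assign each
    datum to the nearest centre (Euclidean), re-centre, repeat to convergence."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), c, replace=False)]   # random data points
    for _ in range(max_iter):
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # nearest-centre assignment
        new = np.array([points[labels == k].mean(axis=0) if np.any(labels == k)
                        else centres[k] for k in range(c)])       # keep empty centres
        if np.linalg.norm(new - centres) < tol:   # within tolerance: stop
            break
        centres = new
    return labels, centres

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0, 3, 6)])
labels, centres = kmeans(pts, 3)
print(centres)
```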
NASA Astrophysics Data System (ADS)
do Lago, Naydson Emmerson S. P.; Kardec Barros, Allan; Sousa, Nilviane Pires S.; Junior, Carlos Magno S.; Oliveira, Guilherme; Guimares Polisel, Camila; Eder Carvalho Santana, Ewaldo
2018-01-01
This study aims to develop an adaptive-filter algorithm to determine the percentage of body fat from anthropometric indicators in adolescents. Measurements such as body mass, height and waist circumference were collected for analysis. The filter was developed from the Wiener filter, which is used to produce an estimate of a random process and minimizes the mean-square error between the estimated process and the desired process. The LMS algorithm was also studied for the development of the filter because of its simplicity and low computational cost. Excellent results were obtained with the developed filter; these results were analyzed and compared with the collected data.
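A minimal sketch of an LMS adaptive filter of the kind described, illustrated on synthetic data rather than the study's anthropometric measurements; the tap count, step size, and toy target system are assumptions.

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.01):
    """LMS adaptive filter: update the weights to minimize the mean-square
    error between the filter output and the desired signal d."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-3]]
        y[n] = w @ u
        e = d[n] - y[n]                     # estimation error
        w += mu * e * u                     # stochastic-gradient weight update
    return w, y

# toy stand-in: inputs x predicting a target d through a known system
rng = np.random.default_rng(2)
x = rng.normal(size=5000)
d = np.convolve(x, [0.5, -0.3, 0.2, 0.1], mode="full")[:len(x)]
w, y = lms(x, d)
print(w)    # converges toward the underlying weights [0.5, -0.3, 0.2, 0.1]
```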
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
A Metacommunity Framework for Enhancing the Effectiveness of Biological Monitoring Strategies
Roque, Fabio O.; Cottenie, Karl
2012-01-01
Because of inadequate knowledge and funding, the use of biodiversity indicators is often suggested as a way to support management decisions. Consequently, many studies have analyzed the performance of certain groups as indicator taxa. However, in addition to knowing whether certain groups can adequately represent the biodiversity as a whole, we must also know whether they show similar responses to the main structuring processes affecting biodiversity. Here we present an application of the metacommunity framework for evaluating the effectiveness of biodiversity indicators. Although the metacommunity framework has contributed to a better understanding of biodiversity patterns, there is still limited discussion about its implications for conservation and biomonitoring. We evaluated the effectiveness of indicator taxa in representing spatial variation in macroinvertebrate community composition in Atlantic Forest streams, and the processes that drive this variation. We focused on analyzing whether some groups conform to environmental processes and other groups are more influenced by spatial processes, and on how this can help in deciding which indicator group or groups should be used. We showed that a relatively small subset of taxa from the metacommunity would represent 80% of the variation in community composition shown by the entire metacommunity. Moreover, this subset does not have to be composed of predetermined taxonomic groups, but rather can be defined based on random subsets. We also found that some random subsets composed of a small number of genera performed better in responding to major environmental gradients. There were also random subsets that seemed to be affected by spatial processes, which could indicate important historical processes. We were able to integrate in the same theoretical and practical framework, the selection of biodiversity surrogates, indicators of environmental conditions, and more importantly, an explicit integration of environmental and spatial processes into the selection approach. PMID:22937068
Optimizing Urine Processing Protocols for Protein and Metabolite Detection.
Siddiqui, Nazema Y; DuBois, Laura G; St John-Williams, Lisa; Will, Thompson J; Grenier, Carole; Burke, Emily; Fraser, Matthew O; Amundsen, Cindy L; Murphy, Susan K
In urine, factors such as timing of voids and duration at room temperature (RT) may affect the quality of recovered protein and metabolite data. Additives may aid with detection, but can add more complexity in sample collection or analysis. We aimed to identify the optimal urine processing protocol for clinically-obtained urine samples that allows for the highest protein and metabolite yields with minimal degradation. Healthy women provided multiple urine samples during the same day. Women collected their first morning (1st AM) void and another "random void". Random voids were aliquoted with: 1) no additive; 2) boric acid (BA); 3) protease inhibitor (PI); or 4) both BA + PI. Of these aliquots, some were immediately stored at 4°C, and some were left at RT for 4 hours. Proteins and individual metabolites were quantified, normalized to creatinine concentrations, and compared across processing conditions. Sample pools corresponding to each processing condition were analyzed using mass spectrometry to assess protein degradation. Ten Caucasian women between 35 and 65 years of age provided paired 1st morning and random voided urine samples. Normalized protein concentrations were slightly higher in 1st AM compared to random "spot" voids. The addition of BA did not significantly change proteins, while PI significantly improved normalized protein concentrations, regardless of whether samples were immediately cooled or left at RT for 4 hours. In pooled samples, there were minimal differences in protein degradation under the various conditions we tested. In metabolite analyses, there were significant differences in individual amino acids based on the timing of the void. For comparative translational research using urine, information about void timing should be collected and standardized. For urine samples processed in the same day, BA does not appear to be necessary, while the addition of PI enhances protein yields, regardless of 4°C or RT storage temperature.
Benford's law and continuous dependent random variables
NASA Astrophysics Data System (ADS)
Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine
2018-01-01
Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates as well as introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
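A minimal sketch of one such decomposition model: repeatedly breaking every piece at a uniform random point yields dependent lengths whose leading digits approach Benford's law; the number of levels is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(11)

def fragment(length=1.0, levels=12):
    """Repeatedly break every piece at a uniformly random point: a toy model
    of decomposing a conserved quantity into dependent pieces."""
    pieces = np.array([length])
    for _ in range(levels):
        cut = rng.uniform(size=len(pieces))
        pieces = np.concatenate([pieces * cut, pieces * (1 - cut)])
    return pieces

def leading_digit(x):
    """First significant (base-10) digit of each positive value."""
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)

digits = leading_digit(fragment())
freq = np.bincount(digits, minlength=10)[1:] / len(digits)
benford = np.log10(1 + 1 / np.arange(1, 10))
print(np.round(freq, 3))      # empirical leading-digit frequencies
print(np.round(benford, 3))   # Benford's law for comparison
```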
Recruitment and accrual of women in a placebo-controlled clinical pilot study on manual therapy.
Cambron, Jerrilyn A; Hawk, Cheryl; Evans, Roni; Long, Cynthia R
2004-06-01
To investigate the accrual rates and recruitment processes among 3 Midwestern sites during a pilot study on manual therapy for chronic pelvic pain. Multisite pilot study for a randomized, placebo-controlled clinical trial. Three chiropractic institutions in or near major metropolitan cities in the Midwestern United States. Thirty-nine women aged 18 to 45 with chronic pelvic pain of at least 6 months duration, diagnosed by a board certified gynecologist. The method of recruitment was collected for each individual who responded to an advertisement and completed an interviewer-administered telephone screen. Participants who were willing and eligible after 3 baseline visits were entered into a randomized clinical trial. The number of responses and accrual rates were determined for the overall study, each of the 3 treatment sites, and each of the 5 recruitment efforts. In this study, 355 women were screened over the telephone and 39 were randomized, making the rate of randomization approximately 10%. The most effective recruitment methods leading to randomization were direct mail (38%) and radio advertisements (34%). However, success of the recruitment process differed by site. Based on the accrual of this multisite pilot study, a full-scale trial would not be feasible using this study's parameters. However, useful information was gained on recruitment effectiveness, eligibility criteria, and screening protocols among the 3 metropolitan sites.
The Effects of Test Trial and Processing Level on Immediate and Delayed Retention
ERIC Educational Resources Information Center
Chang, Sau Hou
2013-01-01
The purpose of the present study was to investigate the effects of test trial and processing level on immediate and delayed retention. Seventy-six college students were randomly assigned first to the single test and the repeated test trials, and then to the shallow processing level and the deep processing level to study forty stimulus words.…
47 CFR 1.926 - Application processing; initial procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
Title 47 (Telecommunication), Section 1.926: Application processing; initial procedures. Federal Communications Commission, General Practice and Procedure; Grants by Random Selection; Wireless Radio Services Applications and Proceedings; Application Requirements...
Enhancing superconducting critical current by randomness
Wang, Y. L.; Thoutam, L. R.; Xiao, Z. L.; ...
2016-01-11
The key ingredient of high critical currents in a type-II superconductor is defect sites that pin vortices. Here, we demonstrate that a random pinscape, an overlooked pinning system in nanopatterned superconductors, can lead to a substantially larger critical current enhancement at high magnetic fields than an ordered array of vortex pin sites. We reveal that the better performance of a random pinscape is due to the variation of the local density of its pinning sites, which mitigates the motion of vortices. This is confirmed by achieving even higher enhancement of the critical current through a conformally mapped random pinscape, where the distribution of the local density of pinning sites is further enlarged. Our findings highlight the potential of random pinscapes in enhancing the superconducting critical currents of applied superconductors in which random pin sites of nanoscale defects, emerging in the materials synthesis process or through ex-situ irradiation, are the only practical choice for large-scale production. Our results may also stimulate research on effects of a random pinscape in other complementary systems such as colloidal crystals, Bose-Einstein condensates, and Luttinger liquids.
Physically Unclonable Cryptographic Primitives by Chemical Vapor Deposition of Layered MoS2.
Alharbi, Abdullah; Armstrong, Darren; Alharbi, Somayah; Shahrjerdi, Davood
2017-12-26
Physically unclonable cryptographic primitives are promising for securing the rapidly growing number of electronic devices. Here, we introduce physically unclonable primitives from layered molybdenum disulfide (MoS2) by leveraging the natural randomness of their island growth during chemical vapor deposition (CVD). We synthesize a MoS2 monolayer film covered with speckles of multilayer islands, where the growth process is engineered for an optimal speckle density. Using the Clark-Evans test, we confirm that the distribution of islands on the film exhibits complete spatial randomness, hence indicating the growth of multilayer speckles is a spatial Poisson process. Such a property is highly desirable for constructing unpredictable cryptographic primitives. The security primitive is an array of 2048 pixels fabricated from this film. The complex structure of the pixels makes the physical duplication of the array impossible (i.e., physically unclonable). A unique optical response is generated by applying an optical stimulus to the structure. The basis for this unique response is the dependence of the photoemission on the number of MoS2 layers, which by design is random throughout the film. Using a threshold value for the photoemission, we convert the optical response into binary cryptographic keys. We show that the proper selection of this threshold is crucial for maximizing combination randomness and that the optimal value of the threshold is linked directly to the growth process. This study reveals an opportunity for generating robust and versatile security primitives from layered transition metal dichalcogenides.
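For readers unfamiliar with the Clark-Evans test used above, the following sketch computes the aggregation index for a simulated point pattern; it ignores edge corrections and is not the authors' analysis pipeline:

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points, area):
    """Clark-Evans aggregation index R for points in a region of given area.
    R ~ 1 indicates complete spatial randomness (CSR); R < 1 clustering."""
    n = len(points)
    lam = n / area                          # intensity (points per unit area)
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)          # k=2: nearest neighbour besides self
    r_obs = d[:, 1].mean()
    r_exp = 0.5 / np.sqrt(lam)              # expected NN distance under CSR
    se = 0.26136 / np.sqrt(n * lam)         # standard error under CSR
    z = (r_obs - r_exp) / se
    return r_obs / r_exp, z

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(500, 2))    # simulated island centroids
print(clark_evans(pts, area=100 * 100))     # R near 1, |z| small, under CSR
```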
Anhøj, Jacob
2015-01-01
Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
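A hedged sketch of the two core run-chart tests discussed above (longest run and number of median crossings); the cutoffs used here are the commonly quoted heuristics and should be checked against the cited rule sets:

```python
import numpy as np
from scipy.stats import binom

def run_chart_signals(y):
    """Simple run-chart tests for non-random variation around the median,
    in the spirit of the Anhoej rules (cutoffs are common heuristics)."""
    y = np.asarray(y, dtype=float)
    s = np.sign(y - np.median(y))
    s = s[s != 0]                       # drop points falling on the median
    n = len(s)
    changes = np.diff(s) != 0
    n_crossings = int(changes.sum())
    longest, cur = 1, 1                 # longest run on one side of the median
    for changed in changes:
        cur = 1 if changed else cur + 1
        longest = max(longest, cur)
    run_limit = round(np.log2(n)) + 3             # "unusually long" run
    cross_limit = binom.ppf(0.05, n - 1, 0.5)     # "unusually few" crossings
    return {"longest_run": longest, "signal_run": longest > run_limit,
            "crossings": n_crossings, "signal_cross": n_crossings < cross_limit}

rng = np.random.default_rng(2)
print(run_chart_signals(rng.normal(size=24)))                 # random: usually no signal
print(run_chart_signals(np.r_[rng.normal(size=12),
                              rng.normal(1.5, 1, 12)]))       # shifted: likely signals
```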
Probabilistic Estimation of Rare Random Collisions in 3 Space
2009-03-01
extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the... application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly... the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time
A Model of the Base Civil Engineering Work Request/Work Order Processing System.
1979-09-01
changes to the work order processing system. This research identifies the variables that significantly affect the accomplishment time and proposes a... order processing system and its behavior with respect to work order processing time. A conceptual model was developed to describe the work request... work order processing system as a stochastic queueing system in which the processing times and the various distributions are treated as random variables
Random walks of colloidal probes in viscoelastic materials
NASA Astrophysics Data System (ADS)
Khan, Manas; Mason, Thomas G.
2014-04-01
To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that the random walk for the MVM is neither self-similar nor self-affine, even if the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
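The log-spaced time-step idea can be sketched for the simplest ingredient of the authors' model, a harmonically bound Brownian (Ornstein-Uhlenbeck) particle, whose exact propagator is valid for arbitrary step sizes; parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
tau, sigma = 1.0, 1.0                       # relaxation time, stationary std
times = np.logspace(-6, 6, 2000)            # steps uniform in logarithmic time
x = np.empty_like(times)
x[0] = sigma * rng.normal()                 # start in the stationary state
for i in range(1, len(times)):
    dt = times[i] - times[i - 1]
    a = np.exp(-dt / tau)                   # exact OU propagator over dt
    x[i] = a * x[i - 1] + sigma * np.sqrt(1 - a * a) * rng.normal()
msd = (x - x[0]) ** 2                       # displacement over 12 decades in time
```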
True Randomness from Big Data.
Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang
2016-09-26
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
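The authors' extractor is far more sophisticated; as a toy illustration of the general idea of randomness extraction, the classical von Neumann extractor (a different and much weaker technique, named here plainly) produces unbiased bits from i.i.d. biased ones:

```python
import numpy as np

def von_neumann_extract(bits):
    """Classical von Neumann extractor: from i.i.d. biased bits, emit 0 for a
    (0,1) pair and 1 for a (1,0) pair; discard (0,0) and (1,1) pairs."""
    pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

rng = np.random.default_rng(4)
raw = (rng.random(100_000) < 0.8).astype(np.uint8)   # heavily biased source
out = von_neumann_extract(raw)
print(raw.mean(), out.mean(), out.size)              # ~0.8 -> ~0.5, fewer bits
```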
ERIC Educational Resources Information Center
Gillam, Ronald B.; Loeb, Diane Frome; Hoffman, LaVae M.; Bohman, Thomas; Champlin, Craig A.; Thibodeau, Linda; Widen, Judith; Brandel, Jayne; Friel-Patti, Sandy
2008-01-01
Purpose: A randomized controlled trial was conducted to compare the language and auditory processing outcomes of children assigned to receive the Fast ForWord Language intervention (FFW-L) with the outcomes of children assigned to nonspecific or specific language intervention comparison treatments that did not contain modified speech. Method: Two…
ERIC Educational Resources Information Center
Garvin-Doxas, Kathy; Klymkowsky, Michael W.
2008-01-01
While researching student assumptions for the development of the Biology Concept Inventory (BCI; http://bioliteracy.net), we found that a wide class of student difficulties in molecular and evolutionary biology appears to be based on deep-seated, and often unaddressed, misconceptions about random processes. Data were based on more than 500…
Random-walk diffusion and drying of porous materials
NASA Astrophysics Data System (ADS)
Mehrafarin, M.; Faghihi, M.
2001-12-01
Based on random-walk diffusion, a microscopic model for drying is proposed to explain the characteristic features of the drying-rate curve of porous materials. The constant drying-rate period is considered as a normal diffusion process. The transition to the falling-rate regime is attributed to the fractal nature of porous materials which results in crossover to anomalous diffusion.
ERIC Educational Resources Information Center
Newman, Michelle G.; Castonguay, Louis G.; Borkovec, Thomas D.; Fisher, Aaron J.; Boswell, James F.; Szkodny, Lauren E.; Nordberg, Samuel S.
2011-01-01
Objective: Recent models suggest that generalized anxiety disorder (GAD) symptoms may be maintained by emotional processing avoidance and interpersonal problems. Method: This is the first randomized controlled trial to test directly whether cognitive-behavioral therapy (CBT) could be augmented with the addition of a module targeting interpersonal…
Characterization and Simulation of Gunfire with Wavelets
Smallwood, David O.
1999-01-01
Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
Drift as a mechanism for cultural change: an example from baby names.
Hahn, Matthew W; Bentley, R Alexander
2003-01-01
In the social sciences, there is currently no consensus on the mechanism by which cultural elements come and go in human society. For elements that are value-neutral, an appropriate null model may be one of random copying between individuals in the population. We show that the frequency distributions of baby names used in the United States in each decade of the twentieth century, for both males and females, obey a power law that is maintained over 100 years even though the population is growing, names are being introduced and lost every decade and large changes in the frequencies of specific names are common. We show that these distributions are satisfactorily explained by a simple process in which individuals randomly copy names from each other, a process that is analogous to the infinite-allele model of population genetics with random genetic drift. By its simplicity, this model provides a powerful null hypothesis for cultural change. It further explains why a few elements inevitably become highly popular, even if they have no intrinsic superiority over alternatives. Random copying could potentially explain power law distributions in other cultural realms, including the links on the World Wide Web. PMID:12952655
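A minimal sketch of the random-copying (infinite-allele) null model described above; population size, innovation rate, and run length are illustrative assumptions:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)
N, mu, generations = 2000, 0.01, 500
names = np.arange(N)                # everyone starts with a unique name
next_new = N
for _ in range(generations):
    names = names[rng.integers(0, N, size=N)]    # copy a random name
    invent = rng.random(N) < mu                  # occasional innovation
    k = int(invent.sum())
    names[invent] = np.arange(next_new, next_new + k)
    next_new += k
freqs = sorted(Counter(names).values(), reverse=True)
print(freqs[:10])   # a few names dominate; the full distribution is power-law-like
```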
Parallel hyperspectral image reconstruction using random projections
NASA Astrophysics Data System (ADS)
Sevilla, Jorge; Martín, Gabriel; Nascimento, José M. P.
2016-10-01
Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated to be an effective and lightweight way to reduce the number of measurements in hyperspectral data, and thus the volume of data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPU) using the compute unified device architecture (CUDA). Experimental results conducted using synthetic and real hyperspectral datasets on the GPU architecture by NVIDIA, GeForce GTX 980, reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times when compared with the processing time of SpeCA running on one core of the Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
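SpeCA itself is blind to the subspace; the toy sketch below instead assumes the signal subspace is known, illustrating only why random projections of subspace-constrained hyperspectral pixels can be recovered by least squares:

```python
import numpy as np

rng = np.random.default_rng(6)
bands, k, m, n = 200, 8, 30, 5000       # spectral bands, subspace dim, projections, pixels
E = rng.normal(size=(bands, k))         # basis of the (assumed known) signal subspace
X = E @ rng.random((k, n))              # pixels living in a k-dimensional subspace
Phi = rng.normal(size=(m, bands)) / np.sqrt(m)   # random projection matrix
Y = Phi @ X                             # compressed measurements (m << bands)
# least-squares reconstruction: solve (Phi @ E) a = y for the subspace coefficients
A, *_ = np.linalg.lstsq(Phi @ E, Y, rcond=None)
X_hat = E @ A
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))   # ~0 in this noiseless toy case
```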
USDA-ARS?s Scientific Manuscript database
Almond processing has been shown to differentially impact metabolizable energy; however, the effect of food form on the gastrointestinal microbiota is under-investigated. We aimed to assess the interrelationship of almond consumption and processing on the gastrointestinal microbiota. A controlled-fe...
Theory-Driven Process Evaluation of a Complementary Feeding Trial in Four Countries
ERIC Educational Resources Information Center
Newman, Jamie E.; Garces, Ana; Mazariegos, Manolo; Hambidge, K. Michael; Manasyan, Albert; Tshefu, Antoinette; Lokangaka, Adrien; Sami, Neelofar; Carlo, Waldemar A.; Bose, Carl L.; Pasha, Omrana; Goco, Norman; Chomba, Elwyn; Goldenberg, Robert L.; Wright, Linda L.; Koso-Thomas, Marion; Krebs, Nancy F.
2014-01-01
We conducted a theory-driven process evaluation of a cluster randomized controlled trial comparing two types of complementary feeding (meat versus fortified cereal) on infant growth in Guatemala, Pakistan, Zambia and the Democratic Republic of Congo. We examined process evaluation indicators for the entire study cohort (N = 1236) using chi-square…
Asymptotics of small deviations of the Bogoliubov processes with respect to a quadratic norm
NASA Astrophysics Data System (ADS)
Pusev, R. S.
2010-10-01
We obtain results on small deviations of Bogoliubov’s Gaussian measure occurring in the theory of the statistical equilibrium of quantum systems. For some random processes related to Bogoliubov processes, we find the exact asymptotic probability of their small deviations with respect to a Hilbert norm.
Differential Effects of General Metacognition and Task-Specific Beliefs on Strategy Use and Recall.
ERIC Educational Resources Information Center
Weed, Keri; And Others
A self-paced free recall task was employed to assess the effects of motivational and metacognitive influences on active processing and recall. A total of 81 fourth-graders were randomly assigned to one of four instructional conditions: strategy instructions plus process monitoring instructions; strategy instructions only; process monitoring…
Multiplicative processes in visual cognition
NASA Astrophysics Data System (ADS)
Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.
2014-03-01
The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve, elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
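The multiplicative analogue of the CLT mentioned above is easy to demonstrate numerically: the log of a product of many positive i.i.d. factors is a sum, hence approximately normal, so the product itself is approximately log-normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
factors = rng.uniform(0.5, 1.5, size=(1000, 5000))  # 1000 factors per sample
products = factors.prod(axis=0)
# log of the product is approximately normal (large p-value expected) ...
print(stats.normaltest(np.log(products)))
# ... while the product itself is heavily skewed (tiny p-value)
print(stats.normaltest(products))
```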
Knowlden, Adam P; Sharma, Manoj
2014-09-01
Family-and-home-based interventions are an important vehicle for preventing childhood obesity. Systematic process evaluations have not been routinely conducted in assessment of these interventions. The purpose of this study was to plan and conduct a process evaluation of the Enabling Mothers to Prevent Pediatric Obesity Through Web-Based Learning and Reciprocal Determinism (EMPOWER) randomized control trial. The trial was composed of two web-based, mother-centered interventions for prevention of obesity in children between 4 and 6 years of age. Process evaluation used the components of program fidelity, dose delivered, dose received, context, reach, and recruitment. Categorical process evaluation data (program fidelity, dose delivered, dose exposure, and context) were assessed using Program Implementation Index (PII) values. Continuous process evaluation variables (dose satisfaction and recruitment) were assessed using ANOVA tests to evaluate mean differences between groups (experimental and control) and sessions (sessions 1 through 5). Process evaluation results found that both groups (experimental and control) were equivalent, and interventions were administered as planned. Analysis of web-based intervention process objectives requires tailoring of process evaluation models for online delivery. Dissemination of process evaluation results can advance best practices for implementing effective online health promotion programs. © 2014 Society for Public Health Education.
Optimized hardware framework of MLP with random hidden layers for classification applications
NASA Astrophysics Data System (ADS)
Zyarah, Abdullah M.; Ramesh, Abhishek; Merkel, Cory; Kudithipudi, Dhireesha
2016-05-01
Multilayer Perceptron networks with random hidden layers are very efficient at automatic feature extraction and offer significant performance improvements in the training process. They essentially employ a large collection of fixed, random features, and are expedient for form-factor-constrained embedded platforms. In this work, a reconfigurable and scalable architecture is proposed for MLPs with random hidden layers, with a customized building block based on the CORDIC algorithm. The proposed architecture also exploits fixed-point operations for area efficiency. The design is validated for classification on two different datasets. Accuracies of ~90% on the MNIST dataset and 75% for gender classification on the LFW dataset were observed. The hardware achieves a 299× speed-up over the corresponding software realization.
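A software sketch of the algorithmic idea (fixed random hidden weights, with only the output layer trained); this is the extreme-learning-machine style of training, not the authors' CORDIC-based hardware:

```python
import numpy as np

rng = np.random.default_rng(8)

def fit_random_hidden_mlp(X, Y, hidden=500):
    """MLP with a fixed random hidden layer: only the output weights are
    trained, via least squares on the random feature map."""
    W = rng.normal(size=(X.shape[1], hidden))       # random, untrained weights
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                          # random feature map
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)    # train output layer only
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage: two Gaussian classes with one-hot targets
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(2, 1, (200, 4))])
Y = np.vstack([np.tile([1, 0], (200, 1)), np.tile([0, 1], (200, 1))])
W, b, beta = fit_random_hidden_mlp(X, Y)
acc = (predict(X, W, b, beta).argmax(1) == Y.argmax(1)).mean()
print(f"training accuracy: {acc:.2f}")
```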
Regularity of random attractors for fractional stochastic reaction-diffusion equations on Rn
NASA Astrophysics Data System (ADS)
Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han
2018-06-01
We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in H^s(R^n) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in H^s(R^n) and attracts all tempered random subsets of L^2(R^n) with respect to the norm of H^s(R^n). The main difficulty is to show the pullback asymptotic compactness of solutions in H^s(R^n) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors
NASA Astrophysics Data System (ADS)
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b²/N = α²/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
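The Poisson-to-random-matrix transition with growing bandwidth can be probed numerically with the unfolding-free consecutive-spacing-ratio statistic; the reference values in the comments (about 0.386 for Poisson and 0.536 for GOE) are the commonly cited ones, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

def mean_spacing_ratio(N=2000, b=10):
    """Mean consecutive-spacing ratio <r> for a symmetric banded random matrix.
    Reference values: ~0.386 (Poisson) and ~0.536 (GOE)."""
    M = np.zeros((N, N))
    for k in range(b + 1):                   # fill the band of half-width b
        vals = rng.normal(size=N - k)
        M += np.diag(vals, k) + (np.diag(vals, -k) if k else 0)
    e = np.sort(np.linalg.eigvalsh(M))
    s = np.diff(e)
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    return r.mean()

print(mean_spacing_ratio(b=2))     # narrow band (small b^2/N): closer to Poisson
print(mean_spacing_ratio(b=200))   # wide band (large b^2/N): closer to GOE
```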
Explicit equilibria in a kinetic model of gambling
NASA Astrophysics Data System (ADS)
Bassetti, F.; Toscani, G.
2010-06-01
We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of wealths of two agents is up for gambling, and randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum which is shared to the agents. Among others, the exponential distribution appears as steady state in case of a uniformly distributed random fraction, while Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy tailed distribution.
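The uniform-fraction case above is easy to simulate directly; with a uniformly shared pool the wealth distribution should relax to an exponential, for which the median-to-mean ratio is ln 2:

```python
import numpy as np

rng = np.random.default_rng(10)
n, steps = 100_000, 200
w = np.ones(n)                                  # equal initial wealth
for _ in range(steps):
    idx = rng.permutation(n).reshape(-1, 2)     # random pairing of agents
    pool = w[idx[:, 0]] + w[idx[:, 1]]          # entire joint wealth is gambled
    u = rng.random(len(pool))                   # uniformly distributed share
    w[idx[:, 0]], w[idx[:, 1]] = u * pool, (1 - u) * pool
print(w.mean(), np.median(w) / w.mean())        # mean 1; median/mean -> ln 2 ~ 0.693
```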
A randomized controlled trial of an electronic informed consent process.
Rothwell, Erin; Wong, Bob; Rose, Nancy C; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A; Botkin, Jeffrey R
2014-12-01
A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Results from the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and who to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding for some aspects of a research study. © The Author(s) 2014.
Diffusion in randomly perturbed dissipative dynamics
NASA Astrophysics Data System (ADS)
Rodrigues, Christian S.; Chechkin, Aleksei V.; de Moura, Alessandro P. S.; Grebogi, Celso; Klages, Rainer
2014-11-01
Dynamical systems having many coexisting attractors present interesting properties from both fundamental theoretical and modelling points of view. When such dynamics is under bounded random perturbations, the basins of attraction are no longer invariant and there is the possibility of transport among them. Here we introduce a basic theoretical setting which enables us to study this hopping process from the perspective of anomalous transport using the concept of a random dynamical system with holes. We apply it to a simple model by investigating the role of hyperbolicity for the transport among basins. We show numerically that our system exhibits non-Gaussian position distributions, power-law escape times, and subdiffusion. Our simulation results are reproduced consistently from stochastic continuous time random walk theory.
Study on Nonlinear Vibration Analysis of Gear System with Random Parameters
NASA Astrophysics Data System (ADS)
Tong, Cao; Liu, Xiaoyuan; Fan, Li
2018-03-01
In order to study the dynamic characteristics of a nonlinear gear vibration system and the influence of random parameters, a nonlinear stochastic vibration model of a three-degree-of-freedom gear system is first established based on Newton's law, and the random response of the gear vibration is simulated by a stepwise integration method. Secondly, the influence of stochastic parameters such as meshing damping, tooth-side gap, and excitation frequency on the dynamic response of the nonlinear gear system is analyzed using stability analysis methods such as bifurcation diagrams and the Lyapunov exponent method. The analysis shows that the stochastic nature of the parameters cannot be neglected, as it can cause random bifurcation and chaos in the system response. This study will provide an important reference for vibration engineering designers.
Nonlinear Fatigue Damage Model Based on the Residual Strength Degradation Law
NASA Astrophysics Data System (ADS)
Yongyi, Gao; Zhixiao, Su
In this paper, a logarithmic expression describing the residual strength degradation process is developed in order to fit fatigue test results for normalized carbon steel. The definition and expression of fatigue damage due to symmetrical stress with a constant amplitude are also given. The expression of fatigue damage can also explain the nonlinear properties of fatigue damage. Furthermore, the fatigue damage of structures under random stress is analyzed, and an iterative formula to describe the fatigue damage process is deduced. Finally, an approximate method for evaluating the fatigue life of structures under repeated random stress block loading is presented through various calculation examples.
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
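A minimal illustration of the two model classes discussed above, with lag-1 autocorrelations checked against their textbook values (illustrative parameters, not the paper's fits):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10_000
eps = rng.normal(size=n)

# MA(1): x_t = eps_t + theta * eps_{t-1}
theta = 0.6
ma = eps.copy()
ma[1:] += theta * eps[:-1]

# AR(1): x_t = phi * x_{t-1} + eps_t
phi = 0.9
ar = np.empty(n)
ar[0] = eps[0]
for t in range(1, n):
    ar[t] = phi * ar[t - 1] + eps[t]

# lag-1 autocorrelations: theory gives theta/(1+theta^2) ~ 0.441 and phi = 0.9
for x in (ma, ar):
    x0 = x - x.mean()
    print((x0[1:] * x0[:-1]).mean() / x0.var())
```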
Heat currents in electronic junctions driven by telegraph noise
NASA Astrophysics Data System (ADS)
Entin-Wohlman, O.; Chowdhury, D.; Aharony, A.; Dattagupta, S.
2017-11-01
The energy and charge fluxes carried by electrons in a two-terminal junction subjected to random telegraph noise, produced by a single electronic defect, are analyzed. The telegraph processes are imitated by the action of a stochastic electric field that acts on the electrons in the junction. Upon averaging over all random events of the telegraph process, it is found that this electric field supplies, on average, energy to the electronic reservoirs, which is distributed unequally between them: the stronger a reservoir's coupling to the junction, the more energy it gains. Thus the noisy environment can lead to a temperature gradient across an unbiased junction.
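A random telegraph process of the kind invoked above is simply a two-state Markov process with exponentially distributed dwell times; a minimal generator, with illustrative switching rates:

```python
import numpy as np

def telegraph(t_end, rate_up, rate_down, rng):
    """Two-state random telegraph signal: exponentially distributed dwell
    times in each state; returns the switching times (state starts at 0)."""
    times, t, state = [0.0], 0.0, 0
    while t < t_end:
        rate = rate_down if state else rate_up   # rate of leaving current state
        t += rng.exponential(1.0 / rate)
        times.append(min(t, t_end))
        state ^= 1
    return np.array(times)

rng = np.random.default_rng(12)
switches = telegraph(1000.0, rate_up=0.5, rate_down=2.0, rng=rng)
dwells = np.diff(switches)
print(dwells[::2].mean(), dwells[1::2].mean())  # ~2.0 (=1/rate_up) and ~0.5 (=1/rate_down)
```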
Integrated, nonvolatile, high-speed analog random access memory
NASA Technical Reports Server (NTRS)
Katti, Romney R. (Inventor); Wu, Jiin-Chuan (Inventor); Stadler, Henry L. (Inventor)
1994-01-01
This invention provides an integrated, non-volatile, high-speed random access memory. A magnetically switchable ferromagnetic or ferrimagnetic layer is sandwiched between an electrical conductor, which provides the ability to magnetize the magnetically switchable layer, and a magnetoresistive or Hall-effect material, which allows sensing the magnetic field that emanates from the magnetization of the magnetically switchable layer. By using this integrated three-layer form, the writing process, which is controlled by the conductor, is separated from the storage medium in the magnetic layer and from the readback process, which is controlled by the magnetoresistive layer. A circuit for implementing the memory in CMOS or the like is disclosed.
Time reversibility of intracranial human EEG recordings in mesial temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
van der Heyden, M. J.; Diks, C.; Pijn, J. P. M.; Velis, D. N.
1996-02-01
Intracranial electroencephalograms from patients suffering from mesial temporal lobe epilepsy were tested for time reversibility. If the recorded time series is irreversible, the input of the recording system cannot be a realisation of a linear Gaussian random process. We confirmed experimentally that the measurement equipment did not introduce irreversibility in the recorded output when the input was a realisation of a linear Gaussian random process. In general, the non-seizure recordings are reversible, whereas the seizure recordings are irreversible. These results suggest that time reversibility is a useful property for the characterisation of human intracranial EEG recordings in mesial temporal lobe epilepsy.
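One simple (and far from complete) irreversibility diagnostic consistent with the logic above: the third moment of increments vanishes for a linear Gaussian process but not, for example, for a linear process driven by skewed noise. This is an illustrative statistic, not the authors' test:

```python
import numpy as np

def asymmetry_stat(x, lag=1):
    """Normalized third moment of increments; ~0 for a stationary linear
    Gaussian (hence time-reversible) process."""
    d = x[lag:] - x[:-lag]
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

rng = np.random.default_rng(13)
n = 50_000
eg = rng.normal(size=n)                 # Gaussian innovations
ee = rng.exponential(size=n) - 1.0      # skewed, zero-mean innovations
ar_gauss, ar_exp = np.zeros(n), np.zeros(n)
for t in range(1, n):
    ar_gauss[t] = 0.9 * ar_gauss[t - 1] + eg[t]   # reversible (linear Gaussian)
    ar_exp[t] = 0.9 * ar_exp[t - 1] + ee[t]       # irreversible (non-Gaussian)
print(asymmetry_stat(ar_gauss), asymmetry_stat(ar_exp))  # ~0 vs clearly nonzero
```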
NASA Technical Reports Server (NTRS)
Manning, Robert M.
2004-01-01
A systems engineering description of a wideband communications channel is provided, based upon the fundamental propagation aspects of the problem. In particular, the well-known time-variant description of a channel is formulated from the basic multiple scattering processes that occur in a random propagation medium. Such a connection is required if optimal processing methods are to be applied to mitigate the deleterious random fading and multipathing of the channel. An example is given which demonstrates how the effective bandwidth of the channel is diminished due to atmospheric propagation impairments.
In Darwinian evolution, feedback from natural selection leads to biased mutations.
Caporale, Lynn Helena; Doyle, John
2013-12-01
Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context). This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory. © 2013 New York Academy of Sciences.
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
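Event times with a smoothly varying intensity, as in the point-process model above, can be simulated by Lewis-Shedler thinning; the daily-cycle intensity below is an illustrative assumption, not the study's fitted model:

```python
import numpy as np

def thinning(intensity, t_end, lam_max, rng):
    """Lewis-Shedler thinning: simulate an inhomogeneous Poisson process
    with intensity(t) <= lam_max on [0, t_end]."""
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)         # candidate at rate lam_max
        if t > t_end:
            break
        if rng.random() < intensity(t) / lam_max:   # accept with prob λ(t)/λ_max
            events.append(t)
    return np.array(events)

rng = np.random.default_rng(14)
lam = lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * t / 24)  # hypothetical daily cycle
ev = thinning(lam, t_end=24 * 30, lam_max=3.5, rng=rng)
print(len(ev), len(ev) / (24 * 30))                 # event count and mean rate ~2/hour
```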
NASA Astrophysics Data System (ADS)
Fagg, Roger; Smalley, Ian
2018-04-01
Loess landscapes sometimes contain isolated depressed areas, which often appear as lakes. The outline shape (and distribution) of these depressions could be controlled by random processes, particularly if the depressions are caused by loess hydroconsolidation and ground subsidence. By applying the Zingg system of shape classification it is possible to propose a mean random shape for the closed depressions. A Zingg rectangle with a side ratio of about 2:1 is produced by a very simple Monte Carlo method, which had been used previously to calculate the mean random shape of a loess particle. The Zingg rectangle indicates the basic shape of the mean closed depression. A simple four stage process for the formation of the depressions is proposed. They might be called `Hardcastle Hollows' in honour of John Hardcastle who first reported them, in New Zealand. Studies on Ukrainian deposits suggest that there might be some stratigraphic value in the observation of closed depressions; they are often not superimposed in successive depositions of loess. Hydroconsolidation is important in landscape processes. The hollows provide interesting habitats and enlarge the ecological interest of loess deposits; the geoheritage scene is enhanced.
Using circuit theory to model connectivity in ecology, evolution, and conservation.
McRae, Brad H; Dickson, Brett G; Keitt, Timothy H; Shah, Viral B
2008-10-01
Connectivity among populations and habitats is important for a wide range of ecological processes. Understanding, preserving, and restoring connectivity in complex landscapes requires connectivity models and metrics that are reliable, efficient, and process based. We introduce a new class of ecological connectivity models based in electrical circuit theory. Although they have been applied in other disciplines, circuit-theoretic connectivity models are new to ecology. They offer distinct advantages over common analytic connectivity models, including a theoretical basis in random walk theory and an ability to evaluate contributions of multiple dispersal pathways. Resistance, current, and voltage calculated across graphs or raster grids can be related to ecological processes (such as individual movement and gene flow) that occur across large population networks or landscapes. Efficient algorithms can quickly solve networks with millions of nodes, or landscapes with millions of raster cells. Here we review basic circuit theory, discuss relationships between circuit and random walk theories, and describe applications in ecology, evolution, and conservation. We provide examples of how circuit models can be used to predict movement patterns and fates of random walkers in complex landscapes and to identify important habitat patches and movement corridors for conservation planning.
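The circuit-theoretic quantities mentioned above reduce to linear algebra on the graph Laplacian; a minimal sketch of pairwise effective resistance for a toy habitat network (the example weights are illustrative):

```python
import numpy as np

def effective_resistance(adj):
    """Effective resistance between all node pairs of a weighted graph,
    from the pseudoinverse of the graph Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp     # R_ij = L+_ii + L+_jj - 2 L+_ij

# toy landscape: 4 habitat nodes; edge weights are conductances (ease of movement)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 2],
              [0, 0, 2, 0]], dtype=float)
R = effective_resistance(A)
print(R[0, 3])   # lower resistance = better-connected pair of habitat patches
```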
Dynamic Simulation of Random Packing of Polydispersive Fine Particles
NASA Astrophysics Data System (ADS)
Ferraz, Carlos Handrey Araujo; Marques, Samuel Apolinário
2018-02-01
In this paper, we perform molecular dynamics (MD) simulations to study the two-dimensional packing process of both monosized and randomly sized particles with radii ranging from 1.0 to 7.0 μm. The initial positions as well as the radii of five thousand fine particles were defined inside a rectangular box by using a random number generator. Both the translational and rotational movements of each particle were considered in the simulations. In order to deal with interacting fine particles, we take into account both the contact forces and the long-range dispersive forces. We account for normal and static/sliding tangential friction forces between particles and between particle and wall by means of a linear model approach, while the long-range dispersive forces are computed by using a Lennard-Jones-like potential. The packing processes were studied assuming different long-range interaction strengths. We carry out statistical calculations of the different quantities studied, such as packing density, mean coordination number, kinetic energy, and radial distribution function, as the system evolves over time. We find that the long-range dispersive forces can strongly influence the packing process dynamics, as they might form large particle clusters, depending on the intensity of the long-range interaction strength.
Some Minorants and Majorants of Random Walks and Levy Processes
NASA Astrophysics Data System (ADS)
Abramson, Joshua Simon
This thesis consists of four chapters, all relating to some sort of minorant or majorant of random walks or Levy processes. In Chapter 1 we provide an overview of recent work on descriptions and properties of the convex minorant of random walks and Levy processes as detailed in Chapter 2, [72] and [73]. This work rejuvenated the field of minorants, and led to the work in all the subsequent chapters. The results surveyed include point process descriptions of the convex minorant of random walks and Levy processes on a fixed finite interval, up to an independent exponential time, and in the infinite horizon case. These descriptions follow from the invariance of these processes under an adequate path transformation. In the case of Brownian motion, we note how further special properties of this process, including time-inversion, imply a sequential description for the convex minorant of the Brownian meander. This chapter is based on [3], which was co-written with Jim Pitman, Nathan Ross and Geronimo Uribe Bravo. Chapter 1 serves as a long introduction to Chapter 2, in which we offer a unified approach to the theory of concave majorants of random walks. The reasons for the switch from convex minorants to concave majorants are discussed in Section 1.1, but the results are all equivalent. This unified theory is arrived at by providing a path transformation for a walk of finite length that leaves the law of the walk unchanged whilst providing complete information about the concave majorant - the path transformation is different from the one discussed in Chapter 1, but this is necessary to deal with a more general case than the standard one as done in Section 2.6. The path transformation of Chapter 1, which is discussed in detail in Section 2.8, is more relevant to the limiting results for Levy processes that are of interest in Chapter 1. Our results lead to a description of a walk of random geometric length as a Poisson point process of excursions away from its concave majorant, which is then used to find a complete description of the concave majorant of a walk of infinite length. In the case where subsets of increments may have the same arithmetic mean (the more general case mentioned above), we investigate three nested compositions that naturally arise from our construction of the concave majorant. This chapter is based on [4], which was co-written with Jim Pitman. In Chapter 3, we study the Lipschitz minorant of a Levy process. For alpha > 0, the alpha-Lipschitz minorant of a function f : R → R is the greatest function m : R → R such that m ≤ f and |m(s) - m(t)| ≤ alpha|s - t| for all s, t ∈ R, should such a function exist. If X = (X_t)_{t∈R} is a real-valued Levy process that is not a pure linear drift with slope ±alpha, then the sample paths of X have an alpha-Lipschitz minorant almost surely if and only if |E[X_1]| < alpha. Denoting the minorant by M, we investigate properties of the random closed set Z := {t ∈ R : M_t = X_t ∧ X_{t-}}, which, since it is regenerative and stationary, has the distribution of the closed range of some subordinator "made stationary" in a suitable sense. We give conditions for the contact set Z to be countable or to have zero Lebesgue measure, and we obtain formulas that characterize the Levy measure of the associated subordinator. We study the limit of Z as alpha → infinity and find for the so-called abrupt Levy processes introduced by Vigon that this limit is the set of local infima of X.
When X is a Brownian motion with drift beta such that |beta| < alpha, we calculate explicitly the densities of various random variables related to the minorant. This chapter is based on [2], which was co-written with Steven N. Evans. Finally, in Chapter 4 we study the structure of the shocks for the inviscid Burgers equation in dimension 1 when the initial velocity is given by Levy noise, or equivalently when the initial potential psi_0 is a two-sided Levy process. This shock structure turns out to give rise to a parabolic minorant of the Levy process (see Section 4.2 for details). The main results are that when psi_0 is abrupt in the sense of Vigon, or has bounded variation with limsup_{h↓0} h^{-2} psi_0(h) = infinity, the set of points with zero velocity is regenerative, and that in the latter case this set is equal to the set of Lagrangian regular points, which is non-empty. When psi_0 is abrupt the shock structure is discrete, and when psi_0 is eroded there are no rarefaction intervals. This chapter is based on [1].
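The concave majorant of a finite walk, central to Chapter 2, can be computed directly as the upper convex hull of the points (i, S_i); a minimal sketch using the monotone-chain construction:

```python
import numpy as np

def concave_majorant(S):
    """Indices of the vertices of the concave majorant (least concave function
    above the walk), via the upper-hull part of the monotone-chain algorithm."""
    hull = []
    for p in enumerate(S):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord (not concave)
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return [x for x, _ in hull]

rng = np.random.default_rng(15)
S = np.cumsum(rng.normal(size=1000))      # a Gaussian random walk
print(concave_majorant(S)[:10])           # vertex indices; segment slopes decrease
```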
Elbers, Nieke A; Akkermans, Arno J; Cuijpers, Pim; Bruinvels, David J
2011-02-02
Research has shown that the current claims settlement process can have a negative impact on the psychological and physical recovery of personal injury (PI) victims. One of the explanations for the negative impact on health is that the claims settlement process is a stressful experience and victims suffer from renewed victimization caused by the claims settlement process. PI victims can experience a lack of information, lack of involvement, lack of 'voice', and poor communication. We present the first study that aims to empower PI victims with respect to the negative impact of the claims settlement process by means of an internet intervention. The study is a two-armed, randomized controlled trial (RCT), in which 170 PI victims are randomized to either the intervention or the control group. The intervention group will get access to a website providing 1) an information module, so that participants learn what is happening and what to expect during the claims settlement process, and 2) an e-coach module, so that participants learn to cope with problems they experience during the claims settlement process. The control group will get access to a website with hyperlinks to commonly available information only. Participants will be recruited via a PI claims settlement office. Participants are included if they have been involved in a traffic accident which happened less than two years ago, and are at least 18 years old. The main study parameter is the increase in empowerment in the intervention group compared to the control group. Empowerment will be measured by the mastery scale and a self-efficacy scale. The secondary outcomes are perceived justice, burden, well-being, work ability, knowledge, amount of damages, and lawyer-client communication. Data are collected at baseline (T0, measured before randomization) and at three, six, and twelve months after baseline. Analyses will be conducted according to the intention-to-treat principle. This study evaluates the effectiveness of an internet intervention aimed at empowering PI victims. The results will give more insight into the impact of compensation proceedings on health over time, and they can have important consequences for legal claims settlement. Strengths and limitations of this study are discussed. Netherlands Trial Register NTR2360.
Assessment of autophagosome formation by transmission electron microscopy
USDA-ARS?s Scientific Manuscript database
Autophagy is a complex degradative process by which cytosolic material, including organelles, is randomly sequestered within double-membrane bound vesicles termed autophagosomes and targeted for degradation. Initially described as a nutrient stress adaptation response, the process of autophagy is n...
Correlated randomness: Some examples of exotic statistical physics
NASA Astrophysics Data System (ADS)
Stanley, H. Eugene
2005-05-01
One challenge of biology, medicine, and economics is that the systems treated by these sciences have no perfect metronome in time and no perfect spatial architecture -- crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. To understand this 'miracle', one might consider setting aside the human tendency to see the universe as a machine. Instead, one might address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at many spatial and temporal patterns in biology, medicine, and economics. Inspired by principles developed by statistical physics over the past 50 years -- scale invariance and universality -- we review some recent applications of correlated randomness to fields that might startle Boltzmann if he were alive today.
NASA Astrophysics Data System (ADS)
Witteveen, Jeroen A. S.; Bijl, Hester
2009-10-01
The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.
Polynomial chaos expansion with random and fuzzy variables
NASA Astrophysics Data System (ADS)
Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.
2016-06-01
A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
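A minimal sketch of a Legendre PCE for a single variable supported on [-1, 1] (uniform here; the paper's point is that fuzzy variables admit the same Legendre basis): project the response by Gauss-Legendre quadrature and read off moments from the coefficients. The toy response function is an assumption:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_pce(g, order=8):
    """Project g(xi), xi ~ Uniform(-1, 1), onto Legendre polynomials via
    Gauss-Legendre quadrature; returns PCE coefficients c_0..c_order."""
    x, w = legendre.leggauss(order + 8)             # quadrature nodes/weights
    gx = g(x)
    return np.array([(2 * k + 1) / 2.0 *
                     np.sum(w * gx * legendre.Legendre.basis(k)(x))
                     for k in range(order + 1)])

g = lambda xi: np.exp(0.5 * xi) + xi ** 2           # toy uncertain response
c = legendre_pce(g)
mean = c[0]                                         # E[P_0] = 1; higher terms vanish
var = np.sum(c[1:] ** 2 / (2 * np.arange(1, len(c)) + 1))
print(mean, var)
# Monte Carlo cross-check of the PCE moments
xi = np.random.default_rng(16).uniform(-1, 1, 1_000_000)
print(g(xi).mean(), g(xi).var())
```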
Secure self-calibrating quantum random-bit generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiorentino, M.; Santori, C.; Spillane, S. M.
2007-03-15
Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
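Min-entropy, the quantity bounded in the paper, is simply -log2 of the probability of the most likely outcome; an empirical estimate for byte streams (a classical sketch, not the quantum tomographic bound):

```python
import numpy as np

def min_entropy_per_byte(samples):
    """Min-entropy H_min = -log2(max_x p(x)) of the empirical byte distribution;
    the distillable almost-uniform bits per symbol are bounded by H_min."""
    counts = np.bincount(samples, minlength=256)
    p_max = counts.max() / counts.sum()
    return -np.log2(p_max)

rng = np.random.default_rng(17)
uniform = rng.integers(0, 256, 1_000_000)
biased = rng.choice(256, 1_000_000, p=np.r_[0.5, np.full(255, 0.5 / 255)])
print(min_entropy_per_byte(uniform))   # ~8 bits/byte
print(min_entropy_per_byte(biased))    # ~1 bit/byte (dominated by symbol 0)
```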
Random Sequence for Optimal Low-Power Laser Generated Ultrasound
NASA Astrophysics Data System (ADS)
Vangi, D.; Virga, A.; Gulino, M. S.
2017-08-01
Low-power laser-generated ultrasound is lately gaining importance in the research world, thanks to the possibility of investigating the structural integrity of a mechanical component through a non-contact, Non-Destructive Testing (NDT) procedure. The ultrasounds are, however, very low in amplitude, making it necessary to use pre-processing and post-processing operations on the signals to detect them. The cross-correlation technique is used in this work, meaning that a random signal must be used as the laser input. For this purpose, a highly random and simple-to-create code called the T sequence, capable of enhancing ultrasound detectability, is introduced (not previously available in the state of the art). Several important parameters characterize the T sequence and can influence the process: the number of pulses N_pulses, the pulse duration δ, and the distance between pulses d_pulses. A finite element (FE) model of a 3 mm steel disk has been developed initially to study analytically the longitudinal ultrasound generation mechanism and the obtainable outputs. Later, experimental tests have shown that the T sequence is highly flexible for ultrasound detection purposes, making it optimal to use high N_pulses and δ but low d_pulses. In the end, apart from describing all phenomena that arise in the low-power laser generation process, the results of this study are also important for setting up an effective NDT procedure using this technology.
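The cross-correlation detection step described above can be sketched generically: a known random ±1 excitation sequence buried well below the per-sample noise floor is recovered through the correlation processing gain (all parameters illustrative, not the paper's T sequence):

```python
import numpy as np

rng = np.random.default_rng(18)
n_seq = 4096
code = rng.integers(0, 2, n_seq) * 2.0 - 1.0        # random +/-1 excitation sequence
delay, attenuation = 1234, 0.1
received = np.zeros(3 * n_seq)
received[delay:delay + n_seq] += attenuation * code # weak, delayed echo
received += rng.normal(0, 1.0, received.size)       # per-sample SNR is -20 dB

# cross-correlate the received signal with the known code
xc = np.correlate(received, code, mode="valid")
print(xc.argmax())   # ~1234: the processing gain (~n_seq * attenuation) reveals the echo
```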
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
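For the classical, ergodic case that the abstract starts from, the MEP reduces to a one-line variational problem. A minimal sketch of Jaynes's dice example: maximizing Shannon entropy subject to a prescribed mean yields the exponential family p_i proportional to exp(-lam*x_i), with lam fixed by the constraint (the constraint value 4.5 is an arbitrary illustration).

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)            # die faces
target_mean = 4.5              # illustrative constraint

def mean_of(lam):
    w = np.exp(-lam * x)
    return (w @ x) / w.sum()

# mean_of is monotone in lam, so a bracketing root-finder pins the multiplier.
lam = brentq(lambda L: mean_of(L) - target_mean, -5.0, 5.0)
p = np.exp(-lam * x)
p /= p.sum()
print("lambda =", round(lam, 4))
print("p =", p.round(4), " entropy =", round(-(p * np.log(p)).sum(), 4))
```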
NASA Astrophysics Data System (ADS)
Tetsumoto, Tomohiro; Kumazaki, Hajime; Ishida, Rammaru; Tanabe, Takasumi
2018-01-01
Recent progress on the fabrication techniques used in silicon photonics foundries has enabled us to fabricate photonic crystal (PhC) nanocavities using a complementary metal-oxide-semiconductor (CMOS) compatible process. A high Q two-dimensional PhC nanocavity and a one-dimensional nanobeam PhC cavity with a Q exceeding 100 thousand have been fabricated using ArF excimer laser immersion lithography. These are important steps toward the fusion of silicon photonics devices and PhC devices. Although the fabrication must be reproducible for industrial applications, the properties of PhC nanocavities are sensitively affected by the proximity effect and randomness. In this study, we quantitatively investigated the influence of the proximity effect and randomness on a silicon nanobeam PhC cavity. First, we discussed the optical properties of cavities defined with one- and two-step exposure methods, which revealed the necessity of a multi-stage exposure process for our structure. Then, we investigated the impact of block structures placed next to the cavities. The presence of the blocks modified the resonant wavelength of the cavities by about 10 nm. The highest Q we obtained was over 100 thousand. We also discussed the influence of photomask misalignment, which is also a possible cause of disorders in the photolithographic fabrication process. This study will provide useful information for fabricating integrated photonic circuits with PhC nanocavities using a photolithographic process.
Random-Walk Type Model with Fat Tails for Financial Markets
NASA Astrophysics Data System (ADS)
Matuttis, Hans-Georg
Starting from the random-walk model, trading practices of financial markets are incorporated into the random walk so that fat-tailed distributions like those in the high-frequency data of the S&P 500 index are reproduced, even though the individual mechanisms are modeled with normally distributed data. The incorporation of local correlation narrows the distribution for "frequent" events, whereas global correlations due to technical analysis lead to fat tails. Delay of market transactions in the trading process shifts the fat-tail probabilities downwards. Such an inclusion of reactions to market fluctuations leads to mini-trends which are distributed with unit variance.
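A toy illustration of the general idea (not the paper's actual market mechanisms): increments are drawn from a normal distribution, but letting recent moves feed back into the volatility, as a crude stand-in for reactions to market fluctuations, already produces measurable fat tails.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 200_000, 0.5
r = np.zeros(n)
for t in range(1, n):
    sigma = 1.0 + alpha * abs(r[t - 1])   # volatility reacts to the last move
    r[t] = sigma * rng.normal()           # the increment itself is Gaussian

excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
print(f"excess kurtosis = {excess_kurtosis:.2f}  (0 for an ordinary Gaussian walk)")
```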
Templated Sphere Phase Liquid Crystals for Tunable Random Lasing
Chen, Ziping; Hu, Dechun; Chen, Xingwu; Zeng, Deren; Lee, Yungjui; Chen, Xiaoxian; Lu, Jiangang
2017-01-01
A sphere phase liquid crystal (SPLC), composed of three-dimensional twist structures with disclinations among them, exists between the isotropic phase and the blue phase in a very narrow temperature range of only a few degrees. A low-concentration polymer template is applied to improve the thermal stability of SPLCs and broadens the temperature range to more than 448 K. By template processing, wavelength-tunable random lasing is demonstrated with a dye-doped SPLC. With different polymer concentrations, the reconstructed SPLC random lasing may achieve more than 40 nm of continuous wavelength shifting by electric-field modulation. PMID:29140283
Smooth invariant densities for random switching on the torus
NASA Astrophysics Data System (ADS)
Bakhtin, Yuri; Hurth, Tobias; Lawley, Sean D.; Mattingly, Jonathan C.
2018-04-01
We consider a random dynamical system obtained by switching between the flows generated by two smooth vector fields on the 2d-torus, with the random switchings happening according to a Poisson process. Assuming that the driving vector fields are transversal to each other at all points of the torus and that each of them allows for a smooth invariant density and no periodic orbits, we prove that the switched system also has a smooth invariant density, for every switching rate. Our approach is based on an integration by parts formula inspired by techniques from Malliavin calculus.
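A minimal simulation sketch of such a switched system: the state flows on the 2-torus under one of two constant, everywhere-transversal vector fields and switches at the jumps of a Poisson clock. The particular fields and switching rate are illustrative assumptions (the paper's hypotheses also exclude periodic orbits, which rational-slope constant flows would violate; the slopes below are taken irrational).

```python
import numpy as np

rng = np.random.default_rng(2)
v = [np.array([1.0, 0.41421356]),   # field 0: irrational slope, no periodic orbits
     np.array([0.31622777, 1.0])]   # field 1: transversal to field 0 (det != 0)
rate, dt, T = 2.0, 1e-3, 100.0      # Poisson switching rate, Euler step, horizon

x = np.array([0.1, 0.7])
mode = 0
t_next = rng.exponential(1.0 / rate)          # first switching time
trajectory, t = [], 0.0
while t < T:
    if t >= t_next:                           # Poisson clock fired: switch flows
        mode = 1 - mode
        t_next += rng.exponential(1.0 / rate)
    x = (x + dt * v[mode]) % 1.0              # Euler step, wrapped onto the torus
    trajectory.append(x.copy())
    t += dt

# A 2D histogram approximates the invariant density; for these constant fields
# it is uniform, while non-constant fields give non-trivial smooth densities.
hist, _, _ = np.histogram2d(*np.array(trajectory).T, bins=20, density=True)
print(f"density range over the torus: [{hist.min():.2f}, {hist.max():.2f}]")
```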
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
Xu, Long; Zhao, Hua; Xu, Caixia; Zhang, Siqi; Zou, Yingyin K; Zhang, Jingwen
2014-02-01
Broadband optical amplification was observed and investigated in Er3+-doped electrostrictive ceramics of lanthanum-modified lead zirconate titanate under a corona atmosphere. Changes in the ceramic structure caused by UV light and electric fields, together with random walks originating from the diffusive process in intrinsically disordered materials, may all contribute to the optical amplification and the associated energy storage. A discussion based on optical energy storage and diffusion equations is given to explain the findings. These experiments make it possible to study random walks and optical amplification in transparent ceramic materials.
1992-08-01
…cryptography to the simulation of epidemic processes and tests of the intrinsic randomness of quantum mechanics. Discussed in this paper are … a theorem concerning the height of a random labelled rooted tree [4]; letting f(s) = 1/2 + s^2, G(t) = 1 - e^(-t) if t > 0, G(t) = 0 otherwise, and …
ERIC Educational Resources Information Center
Guldenoglu, Birkan; Miller, Paul; Kargin, Tevhide
2014-01-01
The present study aimed to examine the relationship between letter processing and word processing skills in deaf and hearing readers. The participants were 105 students (51 of them hearing, 54 of them deaf) who were evenly and randomly recruited from two levels of education (primary = 3rd-4th graders; middle = 6th-7th graders). The students were…
Black-Scholes model under subordination
NASA Astrophysics Data System (ADS)
Stanislavsky, A. A.
2003-02-01
In this paper, we consider a new mathematical extension of the Black-Scholes (BS) model in which the stochastic time and stock share price evolution is described by two independent random processes. The parent process is Brownian, and the directing process is inverse to the totally skewed, strictly α-stable process. The subordinated process represents the Brownian motion indexed by an independent, continuous and increasing process. This allows us to introduce the long-term memory effects in the classical BS model.
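A hedged simulation sketch of this construction: a totally skewed α-stable subordinator S(u) is generated with Kanter's method, inverted to obtain the directing process E(t) = inf{u : S(u) > t}, and used as the operational time of an independent Brownian motion. The index α, grids and scalings are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.7                      # stability index of the subordinator, 0 < alpha < 1
n_u, du = 20_000, 1e-3           # operational-time grid for the parent process

def one_sided_stable(size, alpha, rng):
    """Kanter's method: S with Laplace transform exp(-s**alpha)."""
    theta = np.pi * rng.uniform(size=size)
    w = rng.exponential(size=size)
    a = (np.sin(alpha * theta) ** (alpha / (1 - alpha))
         * np.sin((1 - alpha) * theta)
         / np.sin(theta) ** (1 / (1 - alpha)))
    return (a / w) ** ((1 - alpha) / alpha)

# Totally skewed stable subordinator S(u) and parent Brownian motion B(u),
# both sampled on the same operational-time grid.
S = np.cumsum(du ** (1 / alpha) * one_sided_stable(n_u, alpha, rng))
B = np.cumsum(np.sqrt(du) * rng.normal(size=n_u))

# Inverse subordinator E(t) = inf{u : S(u) > t}; subordinated process X(t) = B(E(t)).
t_grid = np.linspace(0.0, S[-1] * 0.9, 500)
idx = np.searchsorted(S, t_grid)
X = B[idx]
print(X[:5])                     # sample path values of the subordinated motion
```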
E. Freeman; G. Moisen; J. Coulston; B. Wilson
2014-01-01
Random forests (RF) and stochastic gradient boosting (SGB), both involving an ensemble of classification and regression trees, are compared for modeling tree canopy cover for the 2011 National Land Cover Database (NLCD). The objectives of this study were twofold. First, sensitivity of RF and SGB to choices in tuning parameters was explored. Second, performance of the...
Nonvolatile GaAs Random-Access Memory
NASA Technical Reports Server (NTRS)
Katti, Romney R.; Stadler, Henry L.; Wu, Jiin-Chuan
1994-01-01
Proposed random-access integrated-circuit electronic memory offers nonvolatile magnetic storage. Bits stored magnetically and read out with Hall-effect sensors. Advantages include short reading and writing times and high degree of immunity to both single-event upsets and permanent damage by ionizing radiation. Use of same basic material for both transistors and sensors simplifies fabrication process, with consequent benefits in increased yield and reduced cost.
Information Selection in Intelligence Processing
2011-12-01
… given. Edges connecting nodes representing irrelevant persons with either relevant or irrelevant persons are added randomly, as in an Erdős-Rényi graph (Erdős and Rényi, 1959): for each irrelevant node i and another node j (either relevant or irrelevant) there is a predetermined probability that …
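A minimal sketch of the edge-insertion step described above, with hypothetical node counts and probability:

```python
import itertools, random

random.seed(4)
nodes = range(50)
irrelevant = set(range(10, 50))   # hypothetical: nodes 10..49 are irrelevant persons
p = 0.05                          # predetermined edge probability

# Each unordered pair with at least one irrelevant endpoint gets an edge
# independently with probability p, as in an Erdos-Renyi random graph.
edges = [(i, j) for i, j in itertools.combinations(nodes, 2)
         if (i in irrelevant or j in irrelevant) and random.random() < p]
print(len(edges), "randomly added edges")
```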
Paul L. Patterson; Sara A. Goeking
2012-01-01
The annual forest inventory of New Mexico began as an accelerated inventory, and 8 of the 10 Phase 2 panels were sampled between 2008 and 2011. The inventory includes a large proportion of nonresponse. FIA's estimation process uses post-stratification and assumes that nonresponse occurs at random within each stratum. We construct an estimator for the New Mexico...
Elizabeth A. Freeman; Gretchen G. Moisen; John W. Coulston; Barry T. (Ty) Wilson
2015-01-01
As part of the development of the 2011 National Land Cover Database (NLCD) tree canopy cover layer, a pilot project was launched to test the use of high-resolution photography coupled with extensive ancillary data to map the distribution of tree canopy cover over four study regions in the conterminous US. Two stochastic modeling techniques, random forests (RF...
Yap, Melvin J; Balota, David A; Cortese, Michael J; Watson, Jason M
2006-12-01
This article evaluates 2 competing models that address the decision-making processes mediating word recognition and lexical decision performance: a hybrid 2-stage model of lexical decision performance and a random-walk model. In 2 experiments, nonword type and word frequency were manipulated across 2 contrasts (pseudohomophone-legal nonword and legal-illegal nonword). When nonwords became more wordlike (i.e., BRNTA vs. BRANT vs. BRANE), response latencies to nonwords were slowed and the word frequency effect increased. More important, distributional analyses revealed that the Nonword Type × Word Frequency interaction was modulated by different components of the response time distribution, depending on the specific nonword contrast. A single-process random-walk model was able to account for this particular set of findings more successfully than the hybrid 2-stage model. (c) 2006 APA, all rights reserved.
ERIC Educational Resources Information Center
Hoffart, Asle; Borge, Finn-Magnus; Sexton, Harold; Clark, David M.
2009-01-01
The purpose of this study was to test cognitive and interpersonal models for improving social phobia. Eighty patients with social phobia were randomized to 10-week residential cognitive (RCT) or residential interpersonal psychotherapy (RIPT). They completed process measures every Thursday and a sub-outcome measure every Monday. The ratings were…
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Lisonek, Petr; Soukal, David
2005-03-01
In this paper, we show that the communication channel known as writing in memory with defective cells is a relevant information-theoretical model for a specific case of passive warden steganography when the sender embeds a secret message into a subset C of the cover object X without sharing the selection channel C with the recipient. The set C could be arbitrary, determined by the sender from the cover object using a deterministic, pseudo-random, or a truly random process. We call this steganography "writing on wet paper" and realize it using low-density random linear codes with the encoding step based on the LT process. The importance of writing on wet paper for covert communication is discussed within the context of adaptive steganography and perturbed quantization steganography. Heuristic arguments supported by tests using blind steganalysis indicate that the wet paper steganography provides improved steganographic security for embedding in JPEG images and is less vulnerable to attacks when compared to existing methods with shared selection channels.
An improved exceedance theory for combined random stresses
NASA Technical Reports Server (NTRS)
Lester, H. C.
1974-01-01
An extension is presented of Rice's classic solution for the exceedances of a constant level by a single random process to its counterpart for an n-dimensional vector process. An interaction boundary, analogous to the constant level considered by Rice for the one-dimensional case, is assumed in the form of a hypersurface. The theory for the numbers of boundary exceedances is developed by using a joint statistical approach which fully accounts for all cross-correlation effects. An exact expression is derived for the n-dimensional exceedance density function, which is valid for an arbitrary interaction boundary. For application to biaxial states of combined random stress, the general theory is reduced to the two-dimensional case. An elliptical stress interaction boundary is assumed and the exact expression for the density function is presented. The equations are expressed in a format which facilitates calculating the exceedances by numerically evaluating a line integral. The behavior of the density function for the two-dimensional case is briefly discussed.
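The one-dimensional building block of this theory is Rice's upcrossing rate nu(u) = (1/(2*pi)) * sqrt(lam2/lam0) * exp(-u^2/(2*lam0)), with lam0 and lam2 the spectral moments of the process. A numerical sanity check on a synthetic band-limited Gaussian process (the flat 1-5 Hz band is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt = 2 ** 20, 1e-3
freqs = np.fft.rfftfreq(n, dt)
amp = ((freqs > 1.0) & (freqs < 5.0)).astype(float)   # flat band, 1-5 Hz

# Fourier synthesis of the Gaussian process: random phases, prescribed amplitude.
spec = amp * np.exp(2j * np.pi * rng.uniform(size=freqs.size))
x = np.fft.irfft(spec, n)
x /= x.std()                                          # normalize so lam0 = 1

u = 1.5
upcrossings = np.sum((x[:-1] < u) & (x[1:] >= u))
T = n * dt

# lam2 of the normalized process, from the one-sided PSD (proportional to amp**2).
w2 = (2 * np.pi * freqs) ** 2
lam2 = np.sum(w2 * amp ** 2) / np.sum(amp ** 2)
nu_rice = np.sqrt(lam2) / (2 * np.pi) * np.exp(-u ** 2 / 2)
print(f"empirical {upcrossings / T:.3f} vs Rice {nu_rice:.3f} upcrossings/s")
```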
Time scale of random sequential adsorption.
Erban, Radek; Chapman, S Jonathan
2007-04-01
A simple multiscale approach to diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. Process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule, provided that the molecule hits the surface, is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
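A minimal sketch of the RSA ingredient in one dimension (the classical "car parking" problem): unit-length molecules adsorb at uniformly random positions and are rejected on overlap. The jamming coverage approaches Rényi's constant, about 0.7476; the stopping rule below is a practical stand-in for the true jamming limit.

```python
import numpy as np

rng = np.random.default_rng(6)
L = 1000.0                     # substrate length, in molecule lengths
placed = np.empty(0)           # sorted left endpoints of adsorbed unit intervals
misses = 0
while misses < 100_000:        # stop after many consecutive rejected attempts
    x = rng.uniform(0, L - 1)
    i = np.searchsorted(placed, x)
    if ((i == 0 or x - placed[i - 1] >= 1.0)
            and (i == len(placed) or placed[i] - x >= 1.0)):
        placed = np.insert(placed, i, x)   # adsorb: no overlap with neighbours
        misses = 0
    else:
        misses += 1                        # rejected attempt
print(f"jamming coverage ~ {len(placed) / L:.3f}  (Renyi's constant: 0.7476)")
```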
Weighted networks as randomly reinforced urn processes
NASA Astrophysics Data System (ADS)
Caldarelli, Guido; Chessa, Alessandro; Crimaldi, Irene; Pammolli, Fabio
2013-02-01
We analyze weighted networks as randomly reinforced urn processes, in which the edge-total weights are determined by a reinforcement mechanism. We develop a statistical test and a procedure based on it to study the evolution of networks over time, detecting the “dominance” of some edges with respect to the others and then assessing if a given instance of the network is taken at its steady state or not. Distance from the steady state can be considered as a measure of the relevance of the observed properties of the network. Our results are quite general, in the sense that they are not based on a particular probability distribution or functional form of the random weights. Moreover, the proposed tool can be applied also to dense networks, which have received little attention by the network community so far, since they are often problematic. We apply our procedure in the context of the International Trade Network, determining a core of “dominant edges.”
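The reinforcement mechanism can be sketched for a single pair of competing edges: each step draws an edge with probability proportional to its current weight and reinforces it by a random amount, so the weight share converges to a random limit, which is how "dominance" emerges. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
w = np.array([1.0, 1.0])                   # initial weights of two competing edges
for _ in range(50_000):
    k = 0 if rng.random() < w[0] / w.sum() else 1   # draw proportional to weight
    w[k] += rng.uniform(0.5, 1.5)                   # random reinforcement
print("limiting weight share of edge 0:", round(w[0] / w.sum(), 3))
```

Rerunning with different seeds gives different limiting shares: the limit is itself random, which is why the paper builds a statistical test rather than reading dominance off a single snapshot.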
Effectivity of artrihpi irrigation for diabetic ulcer healing: A randomized controlled trial
NASA Astrophysics Data System (ADS)
Gayatri, Dewi; Asmorohadi, Aries; Dahlia, Debie
2018-02-01
The healing process of diabetic ulcers is often impeded by inflammation, infection, and a weakened immune state. High-pressure irrigation (10-15 psi) may be used to control the level of infection. This research was designed to identify the effectiveness of the artrihpi irrigation device on diabetic ulcers in public hospitals in Central Java. The study was a randomized controlled trial with a crossover design. Sixty-four subjects were selected using a block randomization technique and were divided into a control and an intervention group. The intervention was given over 6 days, with wound healing evaluated every 3 days. The results demonstrated a significant decrease in healing scores after treatment, although the difference in healing scores between the two groups was not statistically significant. Nevertheless, a difference in means was found: wound healing in the artrihpi group was better than with the syringe (spuit). These results suggest that the artrihpi may offer a way of using high-pressure irrigation to help the healing process of diabetic ulcers.
Experimental analysis of a piezoelectric energy harvesting system for harmonic, random, and sine on random vibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cryns, Jackson W.; Hatchell, Brian K.; Santiago-Rojas, Emiliano
Harvesting power with a piezoelectric vibration powered generator using a full-wave rectifier conditioning circuit is experimentally compared for varying sinusoidal, random and sine on random (SOR) input vibration scenarios. Additionally, the implications of source vibration characteristics on harvester design are discussed. Studies in vibration harvesting have yielded numerous alternatives for harvesting electrical energy from vibrations, but piezoceramics arose as the most compact, energy-dense means of energy transduction. The rise in popularity of harvesting energy from ambient vibrations has made piezoelectric generators commercially available. Much of the available literature focuses on maximizing harvested power through nonlinear processing circuits that require accurate knowledge of generator internal mechanical and electrical characteristics and idealization of the input vibration source, which cannot be assumed in general application. In this manuscript, variations in source vibration and load resistance are explored for a commercially available piezoelectric generator. We characterize the source vibration by its acceleration response for repeatability and transcription to general application. The results agree with numerical and theoretical predictions in previous literature that optimal load resistance varies with transducer natural frequency and source type, and the findings demonstrate that significant gains are seen with lower tuned transducer natural frequencies for similar source amplitudes. Going beyond idealized steady-state sinusoidal and simplified random vibration input, SOR testing allows for more accurate representation of real-world ambient vibration. It is shown that characteristic interactions from more complex vibrational sources significantly alter power generation and power processing requirements by increasing harvested power, shifting optimal conditioning impedance, inducing significant voltage supply fluctuations and ultimately rendering idealized sinusoidal and random analyses insufficient.
Quasirandom geometric networks from low-discrepancy sequences
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2017-08-01
We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in a d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs the diffusion produces clusters of concentration that make the process slower. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
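A minimal sketch of the two-dimensional construction, using SciPy's scrambled Halton generator (the vertex count and connection radius are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.stats import qmc

n, radius = 500, 0.06
pts = qmc.Halton(d=2, scramble=True, seed=8).random(n)   # quasirandom vertices

# Pairwise Euclidean distances; adjacency = closer than the connection radius.
diff = pts[:, None, :] - pts[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
adj = (dist < radius) & ~np.eye(n, dtype=bool)

degrees = adj.sum(1)
print(f"edges: {adj.sum() // 2}, mean degree: {degrees.mean():.2f}")
```

Swapping the Halton points for `rng.random((n, 2))` gives the random geometric counterpart the paper compares against.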
Mobile access to virtual randomization for investigator-initiated trials.
Deserno, Thomas M; Keszei, András P
2017-08-01
Background/aims: Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods: The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via representational state transfer web services. Furthermore, a simple web-based setup allows non-statisticians to configure the appropriate statistics. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results: Apps are provided for iOS and Android, and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion: Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees strictly sequential processing in all trial sites. Covering 88% of all randomization models used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.
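Of the supported schemes, block randomization is the easiest to sketch: each block contains every arm equally often in a random order, so the allocation stays balanced throughout the trial. Block size and arm labels below are illustrative, not the system's actual defaults:

```python
import random

def block_randomization(n_subjects, block_size=4, arms=("A", "B")):
    """Balanced allocation: each block holds every arm equally often."""
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n_subjects:
        block = list(arms) * per_arm
        random.shuffle(block)              # permute within the block
        allocation.extend(block)
    return allocation[:n_subjects]

random.seed(9)
print(block_randomization(10))   # e.g. ['B', 'A', 'A', 'B', ...]
```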
Stochastic stability of parametrically excited random systems
NASA Astrophysics Data System (ADS)
Labou, M.
2004-01-01
Multidegree-of-freedom dynamic systems subjected to parametric excitation are analyzed for stochastic stability. The variation of excitation intensity with time is described by the sum of a harmonic function and a stationary random process. The stability boundaries are determined by the stochastic averaging method. The effect of random parametric excitation on the stability of trivial solutions of the systems of differential equations for the moments of the phase variables is studied. It is assumed that the frequency of the harmonic component falls within the region of combination resonances. Stability conditions for the first and second moments are obtained. It turns out that additional parametric excitation may have a stabilizing or destabilizing effect, depending on the values of certain parameters of the random excitation. As an example, the stability of a beam in plane bending is analyzed.
Quantum-like Viewpoint on the Complexity and Randomness of the Financial Market
NASA Astrophysics Data System (ADS)
Choustova, Olga
In economics and financial theory, analysts use random walk and more general martingale techniques to model behavior of asset prices, in particular share prices on stock markets, currency exchange rates and commodity prices. This practice has its basis in the presumption that investors act rationally and without bias, and that at any moment they estimate the value of an asset based on future expectations. Under these conditions, all existing information affects the price, which changes only when new information comes out. By definition, new information appears randomly and influences the asset price randomly. Corresponding continuous time models are based on stochastic processes (this approach was initiated in the thesis of [4]), see, e.g., the books of [33] and [37] for historical and mathematical details.
Random variability explains apparent global clustering of large earthquakes
Michael, A.J.
2011-01-01
The occurrence of five Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well described by a random process, plus localized aftershocks, and the apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
Laser absorption of carbon fiber reinforced polymer with randomly distributed carbon fibers
NASA Astrophysics Data System (ADS)
Hu, Jun; Xu, Hebing; Li, Chao
2018-03-01
Laser processing of carbon fiber reinforced polymer (CFRP) is a non-traditional machining method with many prospective applications. The laser absorption characteristics of CFRP are analyzed in this paper. A ray-tracing model describing the interaction of the laser spot with CFRP is established. The material model contains randomly distributed carbon fibers, which are generated using an improved carbon fiber placement method. It was found that CFRP has good laser absorption due to multiple reflections of the light rays in the material's microstructure. The randomly distributed carbon fibers make the absorptivity of the light rays change randomly within the laser spot. Meanwhile, the average absorptivity fluctuates noticeably as the laser moves. The experimental measurements agree well with the values predicted by the ray-tracing model.
NASA Astrophysics Data System (ADS)
Berezin, Sergey; Zayats, Oleg
2018-01-01
We study a friction-controlled slide of a body excited by random motions of the foundation it is placed on. Specifically, we are interested in such quantities as displacement, traveled distance, and energy loss due to friction. We assume that the random excitation is switched off at some time (possibly infinite) and show that the problem can be treated in an analytic, explicit, manner. Particularly, we derive formulas for the moments of the displacement and distance, and also for the average energy loss. To accomplish that we use the Pugachev-Sveshnikov equation for the characteristic function of a continuous random process given by a system of SDEs. This equation is solved by reduction to a parametric Riemann boundary value problem of complex analysis.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
Controllable lasing performance in solution-processed organic-inorganic hybrid perovskites.
Kao, Tsung Sheng; Chou, Yu-Hsun; Hong, Kuo-Bin; Huang, Jiong-Fu; Chou, Chun-Hsien; Kuo, Hao-Chung; Chen, Fang-Chung; Lu, Tien-Chang
2016-11-03
Solution-processed organic-inorganic perovskites are fascinating due to their remarkable photo-conversion efficiency and great potential for the cost-effective, versatile and large-scale manufacturing of optoelectronic devices. In this paper, we demonstrate that the perovskite nanocrystal sizes can be simply controlled by manipulating the precursor solution concentrations in a two-step sequential deposition process, thus achieving feasible tunability of the excitonic properties and lasing performance in hybrid metal-halide perovskites. The lasing threshold is around 230 μJ cm^-2 in this solution-processed organic-inorganic lead-halide material, which is comparable to that of colloidal quantum dot lasers. The efficient stimulated emission originates from the multiple random scattering provided by the micrometer-scale rugged morphology and polycrystalline grain boundaries. Thus the excitonic properties in perovskites exhibit high correlation with the formed morphology of the perovskite nanocrystals. Compared to conventional lasers, which normally serve as coherent light sources, perovskite random lasers are promising for making low-cost thin-film lasing devices for flexible and speckle-free imaging applications.
NASA Astrophysics Data System (ADS)
Korepanov, Alexey
2017-12-01
Let T : M → M be a nonuniformly expanding dynamical system, such as a logistic or intermittent map. Let v : M → R^d be an observable and v_n = Σ_{k=0}^{n-1} v ∘ T^k denote the Birkhoff sums. Given a probability measure μ on M, we consider v_n as a discrete time random process on the probability space (M, μ). In smooth ergodic theory there are various natural choices of μ, such as the Lebesgue measure, or the absolutely continuous T-invariant measure. They give rise to different random processes. We investigate the relation between such processes. We show that in a large class of measures, it is possible to couple (redefine on a new probability space) every two processes so that they are almost surely close to each other, with explicit estimates of "closeness". The purpose of this work is to close a gap in the proof of the almost sure invariance principle for nonuniformly hyperbolic transformations by Melbourne and Nicol.
Uncertainty quantification applied to the radiological characterization of radioactive waste.
Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P
2017-09-01
This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits given by the national authority of the waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gaussian random bridges and a geometric model for information equilibrium
NASA Astrophysics Data System (ADS)
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
SUNPLIN: simulation with uncertainty for phylogenetic investigations.
Martins, Wellington S; Carmo, Welton C; Longo, Humberto J; Rosa, Thierson C; Rangel, Thiago F
2013-11-15
Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets.
Enama, Mary E.; Hu, Zonghui; Gordon, Ingelise; Costner, Pamela; Ledgerwood, Julie E.; Grady, Christine
2012-01-01
Background Consent to participate in research is an important component of the conduct of ethical clinical trials. Current consent practices are largely policy-driven. This study was conducted to assess comprehension of study information and satisfaction with the consent form between subjects randomized to concise or to standard informed consent forms as one approach to developing evidence-based consent practices. Methods Participants (N=111) who enrolled into two Phase I investigational influenza vaccine protocols (VRC 306 and VRC 307) at the NIH Clinical Center were randomized to one of two IRB-approved consents; either a standard or concise form. Concise consents had an average of 63% fewer words. All other aspects of the consent process were the same. Questionnaires about the study and the consent process were completed at enrollment and at the last visit in both studies. Results Subjects using concise consent forms scored as well as those using standard length consents in measures of comprehension (7 versus 7, p=0.79 and 20 versus 21, p=0.13), however, the trend was for the concise consent group to report feeling better informed. Both groups thought the length and detail of the consent form was appropriate. Conclusions Randomization of study subjects to different length IRB-approved consents forms as one method for developing evidence-based consent practices, resulted in no differences in study comprehension or satisfaction with the consent form. A concise consent form may be used ethically in the context of a consent process conducted by well-trained staff with opportunities for discussion and education throughout the study. PMID:22542645
Exposure to alcohol advertising and adolescents' drinking beliefs: Role of message interpretation.
Collins, Rebecca L; Martino, Steven C; Kovalchik, Stephanie A; D'Amico, Elizabeth J; Shadel, William G; Becker, Kirsten M; Tolpadi, Anagha
2017-09-01
Recent research revealed momentary associations between exposure to alcohol advertising and positive beliefs about alcohol among adolescents (Martino et al., 2016). We reanalyzed those data to determine whether associations depend on adolescents' appraisal of ads. Over a 10-month period in 2013, 589 youth, ages 11-14, in the Los Angeles, CA, area, participated in a 14-day ecological momentary assessment, logging all exposures to alcohol advertisements as they occurred and completing brief assessments of their skepticism toward, liking of, and identification with any people in each ad, as well as their alcohol-related beliefs at the moment. Participants also completed measures of their alcohol- related beliefs at random moments of nonexposure throughout each day. Mixed-effects regression models compared beliefs about alcohol at moments of exposure to alcohol advertising that was appraised in a particular way (e.g., with liking, without liking) to beliefs at random moments. When youth encountered ads they appraised positively, their beliefs about alcohol were significantly more positive than when they were queried at random moments. Beliefs in the presence of ads that were not positively appraised were generally similar to beliefs at random moments. Youth are active participants in the advertising process. How they respond to and process alcohol advertising strongly moderates the association between exposure and alcohol-related beliefs. More effort is needed to identify attributes of alcohol advertisements, and of youth, that determine how youth process alcohol ads. This information can be used to either limit exposure to problematic ads or make youth more resilient to such exposure. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Random and externally controlled occurrences of Dansgaard-Oeschger events
NASA Astrophysics Data System (ADS)
Lohmann, Johannes; Ditlevsen, Peter D.
2018-05-01
Dansgaard-Oeschger (DO) events constitute the most pronounced mode of centennial to millennial climate variability of the last glacial period. Since their discovery, many decades of research have been devoted to understanding the origin and nature of these rapid climate shifts. In recent years, a number of studies have appeared that report emergence of DO-type variability in fully coupled general circulation models via different mechanisms. These mechanisms result in the occurrence of DO events at varying degrees of regularity, ranging from periodic to random. When examining the full sequence of DO events as captured in the North Greenland Ice Core Project (NGRIP) ice core record, one can observe high irregularity in the timing of individual events at any stage within the last glacial period. In addition to the prevailing irregularity, certain properties of the DO event sequence, such as the average event frequency or the relative distribution of cold versus warm periods, appear to change throughout the glacial. By using statistical hypothesis tests on simple event models, we investigate whether the observed event sequence may have been generated by stationary random processes or rather was strongly modulated by external factors. We find that the sequence of DO warming events is consistent with a stationary random process, whereas dividing the event sequence into warming and cooling events leads to inconsistency with two independent event processes. As we include external forcing, we find a particularly good fit to the observed DO sequence in a model where the average residence time in warm periods is controlled by global ice volume and in cold periods by boreal summer insolation.
A randomized study of a method for optimizing adolescent assent to biomedical research.
Annett, Robert D; Brody, Janet L; Scherer, David G; Turner, Charles W; Dalen, Jeanne; Raissy, Hengameh
2017-01-01
Voluntary consent/assent with adolescents invited to participate in research raises challenging problems. No studies to date have attempted to manipulate autonomy in relation to assent/consent processes. This study evaluated the effects of an autonomy-enhanced individualized assent/consent procedure embedded within a randomized pediatric asthma clinical trial. Families were randomly assigned to remain together or separated during a consent/assent process; the latter we characterize as an autonomy-enhanced assent/consent procedure. We hypothesized that separating adolescents from their parents would improve adolescent assent by increasing knowledge and appreciation of the clinical trial and willingness to participate. Sixty-four adolescent-parent dyads completed procedures. The together versus separate randomization made no difference in adolescent or parent willingness to participate. However, significant differences were found in both parent and adolescent knowledge of the asthma clinical trial based on the assent/consent procedure and adolescent age. The separate assent/consent procedure improved knowledge of study risks and benefits for older adolescents and their parents but not for the younger youth or their parents. Regardless of the assent/consent process, younger adolescents had lower comprehension of information associated with the study medication and research risks and benefits, but not study procedures or their research rights and privileges. The use of an autonomy-enhanced assent/consent procedure for adolescents may improve their and their parent's informed assent/consent without impacting research participation decisions. Traditional assent/consent procedures may result in a "diffusion of responsibility" effect between parents and older adolescents, specifically in attending to key information associated with study risks and benefits.
A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2017-01-01
Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
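A hedged sketch of the third, Fourier-synthesis approach (lengths and scaling conventions are illustrative assumptions, not the paper's exact implementation): a flat prescribed amplitude spectrum with uniformly random phases is inverted to give an approximately Gaussian white sequence.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 601                                  # sequence length, as in the study
phases = rng.uniform(0, 2 * np.pi, n // 2 + 1)
spec = np.exp(1j * phases)               # unit amplitude at every frequency bin
spec[0] = 0.0                            # no DC component: zero-mean output
x = np.fft.irfft(spec, n)
x /= x.std()                             # scale to unit variance

print(f"mean={x.mean():.3f}, var={x.var():.3f}")
# lag-1 autocorrelation should be ~0 for white noise
print("lag-1 autocorr:", round(np.corrcoef(x[:-1], x[1:])[0, 1], 3))
```

Each sample is a sum of many random-phase sinusoids, so near-Gaussianity follows from the central limit theorem, while the flat amplitude spectrum enforces whiteness.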
Applying the Anderson-Darling test to suicide clusters: evidence of contagion at U. S. universities?
MacKenzie, Donald W
2013-01-01
Suicide clusters at Cornell University and the Massachusetts Institute of Technology (MIT) prompted popular and expert speculation of suicide contagion. However, some clustering is to be expected in any random process. This work tested whether suicide clusters at these two universities differed significantly from those expected under a homogeneous Poisson process, in which suicides occur randomly and independently of one another. Suicide dates were collected for MIT and Cornell for 1990-2012. The Anderson-Darling statistic was used to test the goodness-of-fit of the intervals between suicides to the distribution expected under the Poisson process. Suicides at MIT were consistent with the homogeneous Poisson process, while those at Cornell showed clustering inconsistent with such a process (p = .05). The Anderson-Darling test provides a statistically powerful means to identify suicide clustering in small samples. Practitioners can use this method to test for clustering in relevant communities. The difference in clustering behavior between the two institutions suggests that more institutions should be studied to determine the prevalence of suicide clustering in universities and its causes.
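The test logic is straightforward to sketch: under a homogeneous Poisson process the intervals between events are exponential, so an Anderson-Darling goodness-of-fit test against the exponential distribution flags clustering. The simulated intervals below are synthetic, not the Cornell/MIT data:

```python
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(11)
intervals = rng.exponential(scale=300.0, size=25)   # days between simulated events

result = anderson(intervals, dist='expon')          # fit to the exponential law
print("A-D statistic:", round(result.statistic, 3))
print("critical values:", result.critical_values)   # reject if statistic exceeds
```

Clustered data would yield an excess of very short intervals, inflating the statistic above its critical values.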
Temporal processing and long-latency auditory evoked potential in stutterers.
Prestes, Raquel; de Andrade, Adriana Neves; Santos, Renata Beatriz Fernandes; Marangoni, Andrea Tortosa; Schiefer, Ana Maria; Gil, Daniela
Stuttering is a speech fluency disorder that may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. The aim was to characterize temporal processing and the long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n=20) and non-stutterers (n=21), comparable in age, education, and sex. All subjects were submitted to the duration pattern test, the random gap detection test, and the long-latency auditory evoked potential. Individuals who stutter showed poorer performance on the duration pattern and random gap detection tests when compared with fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of the N2 and P3 components; stutterers had higher latency values. Stutterers have poor performance in temporal processing and higher latency values for the N2 and P3 components. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Extreme values and the level-crossing problem: An application to the Feller process
NASA Astrophysics Data System (ADS)
Masoliver, Jaume
2014-04-01
We review the question of the extreme values attained by a random process. We relate it to level crossings of one boundary (first-passage problems) as well as of two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value, and the range or span. We then specialize to diffusion processes and present detailed results for the Wiener and Feller processes.
Modeling methodology for MLS range navigation system errors using flight test data
NASA Technical Reports Server (NTRS)
Karmali, M. S.; Phatak, A. V.
1982-01-01
Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
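A hedged sketch of the error-modeling step, fitting a low-order ARMA model by maximum likelihood to a synthetic residual series (the series and the (2, 1) order are illustrative assumptions, not the flight-test values):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(12)
e = rng.normal(size=2000)
resid = np.empty(2000)               # synthetic stand-in for the range residual
resid[:2] = e[:2]
for t in range(2, 2000):             # an ARMA(2, 1)-type recursion
    resid[t] = 0.6 * resid[t-1] - 0.2 * resid[t-2] + e[t] + 0.4 * e[t-1]

model = ARIMA(resid, order=(2, 0, 1))   # ARMA(p, q) is ARIMA(p, 0, q)
fit = model.fit()                        # maximum likelihood estimation
print(fit.params.round(3))               # AR, MA coefficients and noise variance
```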
Lemasters, John J
2005-01-01
In autophagy, portions of cytoplasm are sequestered into autophagosomes and delivered to lysosomes for degradation. Long assumed to be a random process, increasing evidence suggests that autophagy of mitochondria, peroxisomes, and possibly other organelles is selective. A recent paper (Kissova et al., J. Biol. Chem. 2004;279:39068-39074) shows in yeast that a specific outer membrane protein, Uth1p, is required for efficient mitochondrial autophagy. For this selective autophagy of mitochondria, we propose the term "mitophagy" to emphasize the non-random nature of the process. Mitophagy may play a key role in retarding accumulation of somatic mutations of mtDNA with aging.
Earthquake prediction: the interaction of public policy and science.
Jones, L M
1996-01-01
Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors into informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake. PMID:11607656
NASA Astrophysics Data System (ADS)
Sabra, K.
2006-12-01
The random nature of noise and scattered fields tends to suggest limited utility. Indeed, seismic or acoustic fields from random sources or scatterers are often considered to be incoherent, but there is some coherence between two sensors that receive signals from the same individual source or scatterer. An estimate of the Green's function (or impulse response) between two points can be obtained from the cross-correlation of random wavefields recorded at these two points. Recent theoretical and experimental studies in ultrasonics, underwater acoustics, structural monitoring and seismology have investigated this technique in various environments and frequency ranges. These results provide a means for passive imaging using only the random wavefields, without the use of active sources. The coherent wavefronts emerge from a correlation process that accumulates contributions over time from random sources whose propagation paths pass through both receivers. Results will be presented from experiments using ambient noise cross-correlations for the following applications: 1) passive surface-wave tomography from ocean microseisms and 2) structural health monitoring of marine and airborne structures embedded in turbulent flow.
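The core idea, that cross-correlating diffuse random wavefields at two receivers recovers travel-time information between them, can be demonstrated with a toy one-dimensional simulation. The geometry, wave speed, and source count below are illustrative assumptions, not parameters from the experiments described above.

```python
# Sketch: a travel-time peak emerging from stacked noise cross-correlations.
# Random white-noise sources on a line excite two receivers; the stack
# develops peaks at +/- the inter-receiver travel time (here 0.5 s).
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
c, fs = 1500.0, 1000.0                   # wave speed (m/s), sample rate (Hz)
n = 2000                                 # samples per noise trace
xr1, xr2 = 0.0, 750.0                    # receiver positions (m)
corr = np.zeros(2 * n - 1)

for _ in range(500):                     # stack over many random sources
    xs = rng.uniform(-20000.0, 20000.0)  # random source position on the line
    s = rng.normal(size=n)               # white-noise source signature
    d1 = int(abs(xs - xr1) / c * fs)     # travel times in samples
    d2 = int(abs(xs - xr2) / c * fs)
    r1, r2 = np.roll(s, d1), np.roll(s, d2)  # crude circular propagation
    corr += signal.correlate(r1, r2, mode="full", method="fft")

lags = signal.correlation_lags(n, n, mode="full") / fs
print("dominant lag (s):", lags[np.argmax(np.abs(corr))])  # ~ +/-0.5 s
```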
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
Fatigue Damage Spectrum calculation in a Mission Synthesis procedure for Sine-on-Random excitations
NASA Astrophysics Data System (ADS)
Angeli, Andrea; Cornelis, Bram; Troncossi, Marco
2016-09-01
In many real-life environments, certain mechanical and electronic components may be subjected to Sine-on-Random vibrations, i.e. excitations composed of random vibrations superimposed on deterministic (sinusoidal) contributions, in particular sine tones due to rotating parts of the system (e.g. helicopters, engine-mounted components). These components must be designed to withstand the fatigue damage induced by the “composed” vibration environment, and qualification tests are advisable for the most critical ones. In the case of an accelerated qualification test, a proper test tailoring which starts from the real environment (measured vibration signals) and preserves not only the accumulated fatigue damage but also the “nature” of the excitation (i.e. sinusoidal components plus random process) is important for obtaining reliable results. In this paper, the classic time-domain approach is taken as the reference for comparing different methods of Fatigue Damage Spectrum (FDS) calculation in the case of Sine-on-Random vibration environments. A methodology to compute a Sine-on-Random specification based on a mission FDS is then proposed.
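A hedged sketch of the signal-composition idea follows: a sine tone superimposed on broadband Gaussian noise, with a crude pseudo-damage figure accumulated from successive extrema ranges under Miner's rule. A real FDS computation would filter the signal through a bank of single-degree-of-freedom resonators and use rainflow counting; the tone frequency, amplitudes, and Basquin exponent here are arbitrary assumptions.

```python
# Sketch: compose a sine-on-random record and accumulate a crude
# Basquin/Miner pseudo-damage figure from successive extrema ranges.
# This is only the signal-composition idea, not a full FDS.
import numpy as np

rng = np.random.default_rng(2)
fs, T, b = 2048.0, 10.0, 7.0                    # sample rate, duration, Basquin exponent
t = np.arange(0, T, 1 / fs)
random_part = rng.normal(0.0, 1.0, t.size)      # broadband random vibration
sine_part = 2.0 * np.sin(2 * np.pi * 57.0 * t)  # tone from a rotating part
x = random_part + sine_part

# Turning points: samples where the slope changes sign.
dx = np.diff(x)
tp = x[1:-1][dx[:-1] * dx[1:] < 0]

ranges = np.abs(np.diff(tp))                    # successive extrema ranges
damage = np.sum((0.5 * ranges) ** b)            # Miner sum, unit material constant
print(f"pseudo-damage: {damage:.3e}")
```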
Collective relaxation dynamics of small-world networks
NASA Astrophysics Data System (ADS)
Grabow, Carsten; Grosskinsky, Stefan; Kurths, Jürgen; Timme, Marc
2015-05-01
Complex networks exhibit a wide range of collective dynamic phenomena, including synchronization, diffusion, relaxation, and coordination processes. Their asymptotic dynamics is generically characterized by the local Jacobian, graph Laplacian, or a similar linear operator. The structure of networks with regular, small-world, and random connectivities are reasonably well understood, but their collective dynamical properties remain largely unknown. Here we present a two-stage mean-field theory to derive analytic expressions for network spectra. A single formula covers the spectrum from regular via small-world to strongly randomized topologies in Watts-Strogatz networks, explaining the simultaneous dependencies on network size N , average degree k , and topological randomness q . We present simplified analytic predictions for the second-largest and smallest eigenvalue, and numerical checks confirm our theoretical predictions for zero, small, and moderate topological randomness q , including the entire small-world regime. For large q of the order of one, we apply standard random matrix theory, thereby overarching the full range from regular to randomized network topologies. These results may contribute to our analytic and mechanistic understanding of collective relaxation phenomena of network dynamical systems.
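As a numerical companion to the analytic theory, the spectrum of the graph Laplacian (one of the linear operators named above) can be computed directly for Watts-Strogatz networks; the sketch below tracks the second-smallest eigenvalue, which sets the slowest relaxation rate, across the rewiring parameter q. Network size and degree are arbitrary choices, and networkx plus dense diagonalization stand in for the paper's mean-field formulas.

```python
# Sketch: Laplacian spectrum of Watts-Strogatz networks across the
# rewiring parameter q, from regular (q=0) to fully randomized (q=1).
import numpy as np
import networkx as nx

N, k = 500, 10
for q in [0.0, 0.01, 0.1, 1.0]:
    G = nx.watts_strogatz_graph(N, k, q, seed=0)
    L = nx.laplacian_matrix(G).toarray().astype(float)
    ev = np.linalg.eigvalsh(L)  # sorted ascending; ev[1] governs slow relaxation
    print(f"q={q:5.2f}  lambda_2={ev[1]:.4f}  lambda_max={ev[-1]:.4f}")
```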
The one-dimensional asymmetric persistent random walk
NASA Astrophysics Data System (ADS)
Rossetto, Vincent
2018-04-01
Persistent random walks are intermediate transport processes between a uniform rectilinear motion and a Brownian motion. They are formed by successive steps of random finite lengths and directions travelled at a fixed speed. The isotropic and symmetric 1D persistent random walk is governed by the telegrapher's equation, also called the hyperbolic heat conduction equation. These equations were designed to resolve the paradox of the infinite propagation speed implied by the heat and diffusion equations. The finiteness of both the speed and the correlation length leads to several classes of random walks: persistent random walks in one dimension can display anomalies that cannot arise for Brownian motion, such as anisotropy and asymmetries. In this work we focus on the case where the mean free path is anisotropic, the only anomaly leading to physics different from the telegrapher's case. We derive exact expressions for its Green's function, its scattering statistics, and its distribution of first-passage times at the origin. The phenomenology of the latter shows a transition in quantities such as the escape probability and the residence time.
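A minimal caricature of the anisotropic case can be simulated by drawing exponential flight lengths whose mean depends on the direction of motion. The values l_plus and l_minus below are assumptions, and scattering is simplified to choosing a fresh random direction after each flight rather than reproducing the paper's exact dynamics.

```python
# Sketch: 1D random walk with direction-dependent (anisotropic) mean free
# paths, travelled at fixed unit speed. The anisotropy induces a drift.
import numpy as np

rng = np.random.default_rng(3)
l_plus, l_minus, n_events, n_walkers = 1.0, 0.5, 2000, 5000

pos = np.zeros(n_walkers)
direction = rng.choice([-1.0, 1.0], size=n_walkers)
for _ in range(n_events):
    # direction-dependent mean free path (the anisotropy studied above)
    mean_free = np.where(direction > 0, l_plus, l_minus)
    pos += direction * rng.exponential(mean_free)
    # isotropic scattering: a fresh random direction after each flight
    direction = rng.choice([-1.0, 1.0], size=n_walkers)

print("mean displacement:", pos.mean(), "  variance:", pos.var())
```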
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
Method for removal of random noise in eddy-current testing system
Levy, Arthur J.
1995-01-01
Eddy-current response voltages, generated during inspection of metallic structures for anomalies, are often replete with noise. Analysis of the inspection data is therefore difficult or nearly impossible, resulting in inconsistent or unreliable evaluation of the structure. This invention processes the eddy-current response voltage, removing the effect of random noise, to allow proper identification of anomalies within and associated with the structure.
Ages of Records in Random Walks
NASA Astrophysics Data System (ADS)
Szabó, Réka; Vető, Bálint
2016-12-01
We consider random walks with continuous and symmetric step distributions. We prove universal asymptotics for the average proportion of the age of the kth longest lasting record for k = 1, 2, ... and for the probability that the record of the kth longest age is broken at step n. Due to the relation to the Chinese restaurant process, the ranked sequence of proportions of ages converges to the Poisson-Dirichlet distribution.
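The ranked record-age proportions are easy to probe numerically. The sketch below simulates Gaussian-step walks (one choice of continuous symmetric step distribution), extracts the times at which the running maximum is set, and reports the mean proportion of the longest record age; the horizon and trial count are arbitrary.

```python
# Sketch: record ages in a symmetric random walk. Record times are the
# steps where the running maximum is attained; an age is the gap until
# the next record (or the horizon).
import numpy as np

rng = np.random.default_rng(4)
n, trials = 10_000, 200
longest = []
for _ in range(trials):
    walk = np.cumsum(rng.normal(size=n))
    run_max = np.maximum.accumulate(walk)
    rec_times = np.flatnonzero(walk == run_max)   # steps setting a new record
    ages = np.diff(np.append(rec_times, n))       # lifetime of each record
    longest.append(ages.max() / n)

print("mean proportion of the longest record age:", np.mean(longest))
```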
Response of space shuttle insulation panels to acoustic noise pressure
NASA Technical Reports Server (NTRS)
Vaicaitis, R.
1976-01-01
The response of reusable space shuttle insulation panels to random acoustic pressure fields is studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both the frequency and time domains.
Comparison of Image Processing Techniques using Random Noise Radar
2014-03-27
Acronyms: UWB, ultra-wideband; EM, electromagnetic; CW, continuous wave; RCS, radar cross section; RFI, radio frequency interference; FFT, fast Fourier transform. ... several factors including radar cross section (RCS), orientation, and material makeup. A single monostatic radar at some position collects only range and... Chapter 2 provides the theory behind noise radar and SAR imaging. Section 2.1 presents the basic concepts in transmitting and receiving random...
[On the extinction of populations with several types in a random environment].
Bacaër, Nicolas
2018-03-01
This study focuses on the extinction rate of a population that follows a continuous-time multi-type branching process in a random environment. Numerical computations in a particular example inspired by an epidemic model suggest an explicit formula for this extinction rate, but only for certain parameter values.
ERIC Educational Resources Information Center
Bonnesen, C. T.; Plauborg, R.; Denbaek, A. M.; Due, P.; Johansen, A.
2015-01-01
The Hi Five study was a three-armed cluster randomized controlled trial designed to reduce infections and improve hygiene and well-being among pupils. Participating schools (n = 43) were randomized into either control (n = 15) or one of two intervention groups (n = 28). The intervention consisted of three components: (i) a curriculum (ii)…
NASA Astrophysics Data System (ADS)
Pospisil, J.; Jakubik, P.; Machala, L.
2005-11-01
This article reports the suggestion, realization, and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, especially of its combined imaging, detection, sampling, and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing, and quantization noises. The method applies to the still-camera automatic working regime and uses a static two-dimensional spatially continuous light-reflecting random target with white-noise properties. The theoretical rationale for this random-target method is presented, based on a simulation model of the linear optical intensity response and on the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed on a PC with computation programs developed in MATLAB 6.5. The presented examples and other measurement results demonstrate sufficient repeatability and the acceptability of the described method for comparative evaluations of digital video camera performance under various conditions.
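The central computational step, estimating the MTF as a normalized ratio of output to input power spectral densities of a white-noise target, can be illustrated in one dimension. The Gaussian line-spread function and Welch-estimator settings below are assumptions of this sketch, not properties of the camera chain studied above.

```python
# Sketch: the random-target idea in 1D. A white-noise "target" is blurred
# by an unknown line-spread function; |H(f)| is recovered as the square
# root of the ratio of output to input power spectral densities.
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
fs, n = 1.0, 2**16
target = rng.normal(size=n)                    # white-noise input scene
lsf = signal.windows.gaussian(31, std=3.0)
lsf /= lsf.sum()                               # unit-gain line-spread function
image = signal.fftconvolve(target, lsf, mode="same")

f, p_in = signal.welch(target, fs=fs, nperseg=1024)
_, p_out = signal.welch(image, fs=fs, nperseg=1024)
mtf_est = np.sqrt(p_out / p_in)                # |H(f)| estimate

h_true = np.abs(np.fft.rfft(lsf, 1024))[: f.size]
print("max abs deviation from true |H|:", np.max(np.abs(mtf_est - h_true)))
```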
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing.
Program summary:
Program title: TRQS
Catalogue identifier: AEKA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7924
No. of bytes in distributed program, including test data, etc.: 88 651
Distribution format: tar.gz
Programming language: Mathematica, C
Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a platform supporting a recent version of Mathematica
Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit)
RAM: Case dependent
Classification: 4.15
Nature of problem: Generation of random density matrices.
Solution method: Use of a physical quantum random number generator.
Running time: Generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
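For readers without the Quantis hardware, the underlying construction can be sketched with a software PRNG in place of the QRNG (which of course forfeits the "truly random" aspect that motivates the package). The sketch below draws random density matrices via the Ginibre construction, one standard recipe consistent with the Hilbert-Schmidt measure; it is not a translation of the TRQS Mathematica code.

```python
# Sketch: random density matrices from the Ginibre construction.
# rho = G G^dagger / tr(G G^dagger) is Hilbert-Schmidt distributed;
# a software PRNG stands in for the hardware QRNG used by TRQS.
import numpy as np

rng = np.random.default_rng(6)

def random_density_matrix(d: int) -> np.ndarray:
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T          # positive semidefinite by construction
    return rho / np.trace(rho)    # normalize to unit trace

rho = random_density_matrix(4)
print("trace:", np.trace(rho).real)                       # 1 by construction
print("min eigenvalue:", np.linalg.eigvalsh(rho).min())   # >= 0
```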
Experimentally generated randomness certified by the impossibility of superluminal signals.
Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K
2018-04-01
From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10^-12. These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.
Research in Stochastic Processes.
1983-10-01
increases. A more detailed investigation of the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and... J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes... stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound
ERIC Educational Resources Information Center
Kessel, Robert; Lucke, Robert L.
2008-01-01
Shull, Gaynor and Grimes advanced a model for interresponse time distribution using probabilistic cycling between a higher-rate and a lower-rate response process. Both response processes are assumed to be random in time with a constant rate. The cycling between the two processes is assumed to have a constant transition probability that is…
Nonequivalence of updating rules in evolutionary games under high mutation rates.
Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
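The two updating rules compared above are easy to state in code. The sketch below implements both in a well-mixed population with three strategies and a high mutation rate; fitness values are fixed constants rather than game payoffs, purely to keep the example short, so it illustrates the mechanics rather than reproducing the paper's results.

```python
# Sketch: birth-death (BD) versus death-birth (DB) Moran updating in a
# well-mixed population with mutation.
import numpy as np

rng = np.random.default_rng(7)
n_strat, N, mu, steps = 3, 100, 0.2, 20_000
fitness = np.array([1.0, 1.1, 1.2])   # constant fitness per strategy (toy)

def run(update: str) -> np.ndarray:
    pop = rng.integers(n_strat, size=N)
    for _ in range(steps):
        if update == "BD":   # fitness-biased birth, then uniform death
            w = fitness[pop]
            parent = rng.choice(N, p=w / w.sum())
            dead = rng.integers(N)
        else:                # inverse-fitness-biased death, then uniform birth
            w = 1.0 / fitness[pop]
            dead = rng.choice(N, p=w / w.sum())
            parent = rng.integers(N)
        child = rng.integers(n_strat) if rng.random() < mu else pop[parent]
        pop[dead] = child
    return np.bincount(pop, minlength=n_strat) / N

print("BD final frequencies:", run("BD"))
print("DB final frequencies:", run("DB"))
```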
The human as a detector of changes in variance and bandwidth
NASA Technical Reports Server (NTRS)
Curry, R. E.; Govindaraj, T.
1977-01-01
The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second-order random processes at two nominal periods (1 and 3 s) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal, except for the (3 s, 0.2) process, which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal, except for the (3 s, 0.2) process, for which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level-crossing counters, were also examined; they are descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
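The second stage of the model described above, a Wald-style sequential test on residual variance, can be sketched as follows. The Kalman-filter stage is omitted and the residuals are simulated directly; the thresholds and variance step are assumed values, and Page's reset (making the test one-sided) is a common variant rather than necessarily the authors' exact formulation.

```python
# Sketch: sequential probability ratio test for a step increase in the
# variance of Gaussian residuals.
import numpy as np

rng = np.random.default_rng(8)
s0, s1 = 1.0, 1.3            # nominal and alternative residual std deviations
threshold = np.log(19.0)     # Wald threshold for roughly 5% error rates

# Residuals: variance steps up at sample 300.
x = np.concatenate([rng.normal(0, s0, 300), rng.normal(0, s1, 700)])

llr = 0.0
for t, r in enumerate(x):
    # log-likelihood ratio of one residual under N(0, s1^2) vs N(0, s0^2)
    llr += np.log(s0 / s1) + 0.5 * r**2 * (1.0 / s0**2 - 1.0 / s1**2)
    llr = max(llr, 0.0)      # Page's reset: test only for variance increases
    if llr > threshold:
        print("variance increase declared at sample", t)
        break
```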
Computer modeling of dynamic necking in bars
NASA Astrophysics Data System (ADS)
Partom, Yehuda; Lindenfeld, Avishay
2017-06-01
Necking of thin bodies (bars, plates, shells) is one form of strain localization in ductile materials that may lead to fracture. The phenomenon of necking has been studied extensively, initially for quasistatic loading and later also for dynamic loading. Nevertheless, many issues concerning necking are still unclear. Among these are: 1) is necking a random or deterministic process; 2) how does the specimen choose the final neck location; 3) to what extent do perturbations (material or geometrical) influence the neck forming process; and 4) how do various parameters (material, geometrical, loading) influence the neck forming process. Here we address these issues and others using computer simulations with a hydrocode. Among other things we find that: 1) neck formation is a deterministic process, and by changing one of the parameters influencing it monotonically, the final neck location moves monotonically as well; 2) the final neck location is sensitive to the radial velocity of the end boundaries, and as motion of these boundaries is not fully controlled in tests, this may be why neck formation is sometimes regarded as a random process; and 3) neck formation is insensitive to small perturbations, which is probably why it is a deterministic process.
Motes, Michael A; Yezhuvath, Uma S; Aslan, Sina; Spence, Jeffrey S; Rypma, Bart; Chapman, Sandra B
2018-02-01
Higher-order cognitive training has been shown to enhance performance in older adults, but the neural mechanisms underlying performance enhancement have yet to be fully disambiguated. This randomized trial examined changes in processing speed and processing speed-related neural activity in older participants (57-71 years of age) who underwent cognitive training (CT, N = 12) compared with wait-listed (WLC, N = 15) or exercise-training active (AC, N = 14) controls. The cognitive training taught cognitive control functions of strategic attention, integrative reasoning, and innovation over 12 weeks. All three groups worked through a functional magnetic resonance imaging processing speed task during three sessions (baseline, mid-training, and post-training). Although all groups showed faster reaction times (RTs) across sessions, the CT group showed a significant increase, and the WLC and AC groups showed significant decreases, across sessions in the association between RT and BOLD signal change within the left prefrontal cortex (PFC). Thus, cognitive training led to a change in processing speed-related neural activity in which faster processing speed was associated with reduced PFC activation, fitting previously identified neural efficiency profiles.
Determining Scale-dependent Patterns in Spatial and Temporal Datasets
NASA Astrophysics Data System (ADS)
Roy, A.; Perfect, E.; Mukerji, T.; Sylvester, L.
2016-12-01
Spatial and temporal datasets of interest to Earth scientists often contain plots of one variable against another, e.g., rainfall magnitude vs. time or fracture aperture vs. spacing. Such data, composed of distributions of events along a transect or timeline together with their magnitudes, can display persistent or antipersistent trends, as well as random behavior, that may contain signatures of underlying physical processes. Lacunarity is a technique that was originally developed for multiscale analysis of data. In a recent study we showed that lacunarity can be used to reveal changes in scale-dependent patterns in fracture spacing data. Here we present a further improvement of our technique, with lacunarity applied to various non-binary datasets composed of event spacings and magnitudes. We test our technique on a set of four synthetic datasets, three of which are based on an autoregressive model and have magnitudes at every point along the "timeline", thus representing antipersistent, persistent, and random trends. The fourth dataset is made up of five clusters of events, each containing a set of random magnitudes. The concept of the lacunarity ratio, LR, is introduced; this is the lacunarity of a given dataset normalized to the lacunarity of its random counterpart. It is demonstrated that LR can successfully delineate scale-dependent changes in terms of antipersistence and persistence in the synthetic datasets. This technique is then applied to three different types of data: a hundred-year rainfall record from Knoxville, TN, USA, a set of varved sediments from the Marca Shale, and a set of fracture aperture and spacing data from NE Mexico. While the rainfall data and varved sediments both appear persistent at small scales, at larger scales they both become random. The fracture data, on the other hand, show antipersistence at small scales (within clusters) and random behavior at large scales. Such differences in scale-dependent behavior, whether from antipersistent to random, persistent to random, or otherwise, may be related to differences in the physicochemical properties and processes contributing to multiscale datasets.
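A gliding-box lacunarity and the lacunarity ratio LR described above can be written compactly; the sketch below normalizes the lacunarity of a toy persistent signal against a shuffled counterpart of the same values. The box sizes and the test signal are arbitrary choices, and shuffling is assumed here as the randomization scheme.

```python
# Sketch: gliding-box lacunarity for a 1D magnitude sequence, and the
# lacunarity ratio LR relative to a shuffled (random) counterpart.
import numpy as np

rng = np.random.default_rng(9)

def lacunarity(x: np.ndarray, box: int) -> float:
    # Box masses from a gliding window; L = <M^2> / <M>^2.
    masses = np.convolve(x, np.ones(box), mode="valid")
    return float(np.mean(masses**2) / np.mean(masses) ** 2)

# Toy persistent signal: |cumulative sum| has long runs; compare with shuffle.
x = np.abs(np.cumsum(rng.normal(size=4096)))
for box in [4, 16, 64, 256]:
    lr = lacunarity(x, box) / lacunarity(rng.permutation(x), box)
    print(f"box={box:4d}  LR={lr:.3f}")
```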
Cavity master equation for the continuous time dynamics of discrete-spin models.
Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R
2017-05-01
We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.
Modelling nematode movement using time-fractional dynamics.
Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M
2007-09-07
We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
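The basic correlated-random-walk ingredient, correlated turning angles and speeds, can be sketched directly. The wrapped-normal turning distribution, the gamma speed distribution, and the specific turn-speed coupling below are illustrative assumptions, not the distributions fitted from the nematode trails.

```python
# Sketch: a 2D correlated random walk with directional persistence (small
# turns likely) and a turn-speed coupling: sharper turns go with slower moves.
import numpy as np

rng = np.random.default_rng(10)
n = 5000
heading = 0.0
xy = np.zeros((n, 2))
for i in range(1, n):
    turn = rng.normal(0.0, 0.4)                        # persistence in heading
    heading += turn
    speed = np.exp(-abs(turn)) * rng.gamma(2.0, 0.5)   # slow down on sharp turns
    xy[i] = xy[i - 1] + speed * np.array([np.cos(heading), np.sin(heading)])

disp = np.linalg.norm(xy - xy[0], axis=1)
print("net displacement after n steps:", disp[-1])
```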
Transcription, intercellular variability and correlated random walk.
Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar
2008-11-01
We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
Cognitive Processes that Underlie Mathematical Precociousness in Young Children
ERIC Educational Resources Information Center
Swanson, H. Lee
2006-01-01
The working memory (WM) processes that underlie young children's (ages 6-8 years) mathematical precociousness were examined. A battery of tests that assessed components of WM (phonological loop, visual-spatial sketchpad, and central executive), naming speed, random generation, and fluency was administered to mathematically precocious and…
Effects of Concurrent Music Listening on Emotional Processing
ERIC Educational Resources Information Center
Graham, Rodger; Robinson, Johanna; Mulhall, Peter
2009-01-01
Increased processing time for threatening stimuli is a reliable finding in emotional Stroop tasks. This is particularly pronounced among individuals with anxiety disorders and reflects heightened attentional bias for perceived threat. In this repeated measures study, 35 healthy participants completed a randomized series of Stroop tasks involving…
Security of practical private randomness generation
NASA Astrophysics Data System (ADS)
Pironio, Stefano; Massar, Serge
2013-01-01
Measurements on entangled quantum systems necessarily yield outcomes that are intrinsically unpredictable if they violate a Bell inequality. This property can be used to generate certified randomness in a device-independent way, i.e., without making detailed assumptions about the internal working of the quantum devices used to generate the random numbers. Furthermore these numbers are also private; i.e., they appear random not only to the user but also to any adversary that might possess a perfect description of the devices. Since this process requires a small initial random seed to sample the behavior of the quantum devices and to extract uniform randomness from the raw outputs of the devices, one usually speaks of device-independent randomness expansion. The purpose of this paper is twofold. First, we point out that in most real, practical situations, where the concept of device independence is used as a protection against unintentional flaws or failures of the quantum apparatuses, it is sufficient to show that the generated string is random with respect to an adversary that holds only classical side information; i.e., proving randomness against quantum side information is not necessary. Furthermore, the initial random seed does not need to be private with respect to the adversary, provided that it is generated in a way that is independent from the measured systems. The devices, however, will generate cryptographically secure randomness that cannot be predicted by the adversary, and thus one can, given access to free public randomness, talk about private randomness generation. The theoretical tools to quantify the generated randomness according to these criteria were already introduced in S. Pironio et al. [Nature 464, 1021 (2010)], but the final results were improperly formulated. The second aim of this paper is to correct this inaccurate formulation and therefore lay out a precise theoretical framework for practical device-independent randomness generation.
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Pototzky, Anthony S.
1989-01-01
A theoretical basis and example calculations are given that demonstrate the relationship between the Matched Filter Theory approach to the calculation of time-correlated gust loads and Phased Design Load Analysis in common use in the aerospace industry. The relationship depends upon the duality between Matched Filter Theory and Random Process Theory and upon the fact that Random Process Theory is used in Phased Design Loads Analysis in determining an equiprobable loads design ellipse. Extensive background information describing the relevant points of Phased Design Loads Analysis, calculating time-correlated gust loads with Matched Filter Theory, and the duality between Matched Filter Theory and Random Process Theory is given. It is then shown that the time histories of two time-correlated gust load responses, determined using the Matched Filter Theory approach, can be plotted as parametric functions of time and that the resulting plot, when superposed upon the design ellipse corresponding to the two loads, is tangent to the ellipse. The question is raised of whether or not it is possible for a parametric load plot to extend outside the associated design ellipse. If it is possible, then the use of the equiprobable loads design ellipse will not be a conservative design practice in some circumstances.
Effects of practice schedule and task specificity on the adaptive process of motor learning.
Barros, João Augusto de Camargo; Tani, Go; Corrêa, Umberto Cesar
2017-10-01
This study investigated the effects of practice schedule and task specificity from the perspective of the adaptive process of motor learning. For this purpose, tasks with temporal and force control learning requirements were manipulated in experiments 1 and 2, respectively. Specifically, the task consisted of touching the three sequential targets with the dominant hand with a specific movement time or force for each touch. Participants were children (N=120), both boys and girls, with an average age of 11.2 years (SD=1.0). The design in both experiments involved four practice groups (constant, random, constant-random, and random-constant) and two phases (stabilisation and adaptation). The dependent variables included measures related to the task goal (accuracy and variability of error of the overall movement and force patterns) and to the movement pattern (macro- and microstructures). Results revealed a similar error of the overall patterns for all groups in both experiments, and the groups adapted differently in terms of the macro- and microstructures of movement patterns. The study concludes that the effects of practice schedules on the adaptive process of motor learning were both general and specific to the task: general with respect to task goal performance and specific with respect to the movement pattern.
Li, Yihe; Li, Bofeng; Gao, Yang
2015-11-30
With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.
Kosmidis, Mary H.; Zampakis, Petros; Malefaki, Sonia; Ntoskou, Katerina; Nousia, Anastasia; Bakirtzis, Christos; Papathanasopoulos, Panagiotis
2017-01-01
Cognitive impairment is frequently encountered in multiple sclerosis (MS), affecting between 40% and 65% of individuals irrespective of disease duration and severity of physical disability. In the present multicenter randomized controlled trial, fifty-eight clinically stable RRMS patients with mild to moderate cognitive impairment and relatively low disability status were randomized to receive either computer-assisted (RehaCom) functional cognitive training, with an emphasis on episodic memory, information processing speed/attention, and executive functions, for 10 weeks (IG; n = 32) or standard clinical care (CG; n = 26). Outcome measures included a flexible comprehensive neuropsychological battery of tests sensitive to MS patient deficits and feedback regarding personal benefit gained from the intervention on four verbal questions. Only the IG showed significant improvements in verbal and visuospatial episodic memory, processing speed/attention, and executive functioning from pre- to post-assessment. Moreover, the improvement obtained on attention was retained over 6 months, providing evidence of the long-term benefits of this intervention. Group-by-time interactions revealed significant improvements in composite cognitive domain scores in the IG relative to the demographically and clinically matched CG for verbal episodic memory, processing speed, verbal fluency, and attention. Treated patients rated the intervention positively and were more confident about their cognitive abilities following treatment.
Reynolds, Andy M
2010-12-06
For many years, the dominant conceptual framework for describing non-oriented animal movement patterns has been the correlated random walk (CRW) model in which an individual's trajectory through space is represented by a sequence of distinct, independent randomly oriented 'moves'. It has long been recognized that the transformation of an animal's continuous movement path into a broken line is necessarily arbitrary and that probability distributions of move lengths and turning angles are model artefacts. Continuous-time analogues of CRWs that overcome this inherent shortcoming have appeared in the literature and are gaining prominence. In these models, velocities evolve as a Markovian process and have exponential autocorrelation. Integration of the velocity process gives the position process. Here, through a simple scaling argument and through an exact analytical analysis, it is shown that autocorrelation inevitably leads to Lévy walk (LW) movement patterns on timescales less than the autocorrelation timescale. This is significant because over recent years there has been an accumulation of evidence from a variety of experimental and theoretical studies that many organisms have movement patterns that can be approximated by LWs, and there is now intense debate about the relative merits of CRWs and LWs as representations of non-oriented animal movement patterns.
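The mechanism can be seen in a simulation of the continuous-time analogue: an Ornstein-Uhlenbeck velocity process (exponential autocorrelation) integrated into a position trace shows near-ballistic scaling below the autocorrelation time 1/theta and ordinary diffusion above it. The parameters below are arbitrary, and the mean-squared displacement is only a coarse proxy for the Lévy-walk step statistics discussed in the paper.

```python
# Sketch: OU velocity process (exponential autocorrelation) integrated to a
# position trace; MSD ~ t^2 (ballistic) well below 1/theta, ~ t above it.
import numpy as np

rng = np.random.default_rng(11)
theta, sigma, dt, n = 0.1, 1.0, 0.01, 200_000
v = np.zeros(n)
for i in range(1, n):
    # Euler-Maruyama step of dv = -theta*v dt + sigma dW
    v[i] = v[i - 1] - theta * v[i - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
x = np.cumsum(v) * dt

# Mean-squared displacement at a short lag and a long lag
for lag in [10, 100_000]:
    msd = np.mean((x[lag:] - x[:-lag]) ** 2)
    print(f"lag={lag * dt:8.2f} s  MSD={msd:.4f}")
```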
Phase transitions in the quadratic contact process on complex networks
NASA Astrophysics Data System (ADS)
Varghese, Chris; Durrett, Rick
2013-06-01
The quadratic contact process (QCP) is a natural extension of the well-studied linear contact process, where infected (1) individuals infect susceptible (0) neighbors at rate λ and infected individuals recover (1 → 0) at rate 1. In the QCP, a combination of two 1's is required to effect a 0 → 1 change. We extend the study of the QCP, which so far has been limited to lattices, to complex networks. We define two versions of the QCP: vertex-centered (VQCP) and edge-centered (EQCP) with birth events 1-0-1 → 1-1-1 and 1-1-0 → 1-1-1, respectively, where “-” represents an edge. We investigate the effects of network topology by considering the QCP on random regular, Erdős-Rényi, and power-law random graphs. We perform mean-field calculations as well as simulations to find the steady-state fraction of occupied vertices as a function of the birth rate. We find that on the random regular and Erdős-Rényi graphs, there is a discontinuous phase transition with a region of bistability, whereas on the heavy-tailed power-law graph, the transition is continuous. The critical birth rate is found to be positive in the former but zero in the latter.
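A crude simulation conveys the flavor of the vertex-centered QCP, though continuous-time dynamics would properly use a Gillespie scheme. The synchronous update below, with a flat birth probability once at least two neighbors are occupied, is a simplifying assumption of this sketch rather than the paper's exact rates.

```python
# Sketch: a synchronous-update caricature of the vertex-centered QCP on a
# random regular graph. A vacant vertex with >= 2 occupied neighbors is
# born with probability lam*dt; occupied vertices vacate with probability dt.
import numpy as np
import networkx as nx

rng = np.random.default_rng(12)
N, k, lam, dt, steps = 2000, 6, 2.0, 0.05, 4000
G = nx.random_regular_graph(k, N, seed=0)
A = nx.adjacency_matrix(G)                    # sparse adjacency matrix

state = (rng.random(N) < 0.5).astype(float)   # start half occupied
for _ in range(steps):
    occ_neighbors = A @ state
    birth = (state == 0) & (occ_neighbors >= 2) & (rng.random(N) < lam * dt)
    death = (state == 1) & (rng.random(N) < dt)
    state[birth] = 1.0
    state[death] = 0.0

print("steady-state occupied fraction:", state.mean())
```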
Collective dynamics during cell division
NASA Astrophysics Data System (ADS)
Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina A. M.
In order to divide correctly, cells have to move all their chromosomes to the center, a process known as congression. This task is performed by the combined action of molecular motors and randomly growing and shrinking microtubules. Chromosomes are captured by growing microtubules and transported by motors using the same microtubules as tracks. Coherent motion occurs as a result of a large collection of random and deterministic dynamical events. Understanding this process is important since a failure in chromosome segregation can lead to chromosomal instability, one of the hallmarks of cancer. We describe this complex process in a three-dimensional computational model involving thousands of microtubules. The results show that coherent and robust chromosome congression can only happen if the total number of microtubules is neither too small nor too large. Our results allow for a coherent interpretation of a variety of biological factors already associated in the past with chromosomal instability and related pathological conditions.
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables, and the structural mass, fatigue damage, and negative of the natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic processes, and multiple objectives.
NASA Astrophysics Data System (ADS)
Sarkar, Biplab; Mills, Steven; Lee, Bongmook; Pitts, W. Shepherd; Misra, Veena; Franzon, Paul D.
2018-02-01
In this work, we report on mimicking the synaptic forgetting process using the volatile mem-capacitive effect of a resistive random access memory (RRAM). A TiO2 dielectric, which is known to show volatile memory operation due to the migration of inherent oxygen vacancies, was used to achieve the volatile mem-capacitive effect. By placing the volatile RRAM candidate along with SiO2 at the gate of a MOS capacitor, a volatile capacitance change resembling the forgetting nature of a human brain is demonstrated. Furthermore, the memory operation in the MOS capacitor does not require current flow through the gate dielectric, indicating the feasibility of low-power memory operation. Thus, the mem-capacitive effect of volatile RRAM candidates can be attractive for future neuromorphic systems implementing the forgetting process of a human brain.
Online games: a novel approach to explore how partial information influences human random searches
NASA Astrophysics Data System (ADS)
Martínez-García, Ricardo; Calabrese, Justin M.; López, Cristóbal
2017-01-01
Many natural processes rely on optimizing the success ratio of a search process. We use an experimental setup consisting of a simple online game in which players have to find a target hidden on a board, to investigate how the rounds are influenced by the detection of cues. We focus on the search duration and the statistics of the trajectories traced on the board. The experimental data are explained by a family of random-walk-based models and probabilistic analytical approximations. If no initial information is given to the players, the search is optimized for cues that cover an intermediate spatial scale. In addition, initial information about the extension of the cues results, in general, in faster searches. Finally, strategies used by informed players turn into non-stationary processes in which the length of each displacement evolves to show a well-defined characteristic scale that is not found in non-informed searches.
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} - δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
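The record condition is straightforward to simulate. The sketch below estimates the fraction of entries that are δ-exceedance records for i.i.d. exponential variables at a few constant handicaps; the sample sizes are arbitrary, and the code is a direct transcription of the recursive condition Y_k > Y_{k-1} - δ.

```python
# Sketch: fraction of delta-exceedance records for i.i.d. exponential
# variables with a constant handicap delta (the stationary-phase setting).
import numpy as np

rng = np.random.default_rng(13)
n, trials = 20_000, 10
for delta in [0.5, 1.0, 2.0]:
    frac = 0.0
    for _ in range(trials):
        x = rng.exponential(1.0, n)
        best, count = -np.inf, 0
        for v in x:
            if v > best - delta:      # the recursive record condition
                best, count = v, count + 1
        frac += count / (n * trials)
    print(f"delta={delta}: record fraction ~ {frac:.4f}")
```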
Platten, Ulla; Rantala, Johanna; Lindblom, Annika; Brandberg, Yvonne; Lindgren, Gunilla; Arver, Brita
2012-09-01
Increased demand for genetic counseling services necessitates exploring alternatives to in-person counseling. Telephone counseling is a less time-consuming and more cost-effective alternative. So far there is insufficient evidence to support a pre-counseling telephone model. This randomized questionnaire study aims to evaluate the oncogenetic counseling process and to compare the impact of the initial part of the oncogenetic counseling, when conducted via telephone versus in-person. The aspects of evaluations were: patients' expectations, satisfaction and experiences of genetic counseling, worry for developing hereditary cancer and health related quality of life. A total of 215 participants representing several cancer syndromes were randomized to counseling via telephone or in-person. The questionnaires were completed before and after oncogenetic nurse counseling, and 1 year after the entire counseling process. Overall, a high satisfaction rate with the oncogenetic counseling process was found among the participants regardless of whether the oncogenetic nurse counseling was conducted by telephone or in-person. The results show that a considerable number of participants experienced difficulties with the process of creating a pedigree and dissatisfaction with information on surveillance and prevention. Affected participants reported lower levels in most SF-36 domains compared to non-affected and both groups reported lower levels as compared to a Swedish reference group. The results indicate that telephone pre-counseling works as well as in-person counseling. Emotional support during genetic counseling and information on recommended cancer prevention and surveillance should be improved.