Reliability analysis of structures under periodic proof tests in service
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1976-01-01
A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.
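A minimal Monte Carlo sketch of the renewal scheme described above: a structure with random ultimate strength and random crack-initiation time degrades linearly, is proof-tested periodically, and is rebuilt whenever it fails a proof test. All distributions and parameter values are illustrative assumptions, not Yang's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (assumed) parameters
dt = 0.1                    # interval between service load peaks
n_steps = 1000              # service horizon = n_steps * dt
proof_every = 100           # proof test every 100 load steps
proof_load = 2.2            # proof load level
R0_mu, R0_sd = 3.0, 0.3     # random ultimate strength
ti_shape, ti_scale = 2.0, 60.0   # Weibull time to crack initiation
degrade = 0.02              # strength loss per unit time after initiation
load_mu, load_sd = 1.0, 0.35     # Gaussian service peak loads

def new_structure():
    """Build structures until one passes the initial proof test."""
    while True:
        R0 = rng.normal(R0_mu, R0_sd)
        ti = ti_scale * rng.weibull(ti_shape)
        if R0 > proof_load:
            return R0, ti

def strength(R0, ti, age):
    """Residual strength: constant before crack initiation, then degrading."""
    return R0 - degrade * max(0.0, age - ti)

runs, failures = 2000, 0
for _ in range(runs):
    R0, ti = new_structure()
    age = 0.0
    for k in range(1, n_steps + 1):
        age += dt
        if k % proof_every == 0:
            if strength(R0, ti, age) < proof_load:   # fails the proof test:
                R0, ti = new_structure()             # renewed by a new structure
                age = 0.0
            continue                                 # no service load during a test step
        if strength(R0, ti, age) < rng.normal(load_mu, load_sd):
            failures += 1                            # failure in service
            break

print("estimated in-service failure probability:", failures / runs)
```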
Some functional limit theorems for compound Cox processes
NASA Astrophysics Data System (ADS)
Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.
2016-06-01
An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
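For intuition, the sketch below simulates one concrete compound Cox process: a Poisson jump stream whose intensity is modulated by a randomly switching two-state environment, with iid Gaussian jump sizes. Parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

T, dt = 200.0, 0.01
lam = np.array([0.5, 5.0])   # intensity in the two random-environment states
q = np.array([0.05, 0.2])    # switching rates out of each state

state, X, path = 0, 0.0, []
for _ in np.arange(0.0, T, dt):
    if rng.random() < q[state] * dt:        # the random environment switches
        state = 1 - state
    n = rng.poisson(lam[state] * dt)        # jumps arriving during this step
    if n:
        X += rng.normal(0.0, 1.0, size=n).sum()   # iid jump sizes
    path.append(X)

path = np.asarray(path)
print("terminal value:", path[-1], " increment variance:", np.diff(path).var())
```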
Scaling Limit of Symmetric Random Walk in High-Contrast Periodic Environment
NASA Astrophysics Data System (ADS)
Piatnitski, A.; Zhizhina, E.
2017-11-01
The paper deals with the asymptotic properties of a symmetric random walk in a high contrast periodic medium in Z^d, d≥1. From the existing homogenization results it follows that under diffusive scaling the limit behaviour of this random walk need not be Markovian. The goal of this work is to show that if in addition to the coordinate of the random walk in Z^d we introduce an extra variable that characterizes the position of the random walk inside the period then the limit dynamics of this two-component process is Markov. We describe the limit process and observe that the components of the limit process are coupled. We also prove the convergence in the path space for the said random walk.
On fatigue crack growth under random loading
NASA Astrophysics Data System (ADS)
Zhu, W. Q.; Lin, Y. K.; Lei, Y.
1992-09-01
A probabilistic analysis of the fatigue crack growth, fatigue life and reliability of a structural or mechanical component is presented on the basis of fracture mechanics and theory of random processes. The material resistance to fatigue crack growth and the time-history of the stress are assumed to be random. Analytical expressions are obtained for the special case in which the random stress is a stationary narrow-band Gaussian random process, and a randomized Paris-Erdogan law is applicable. As an example, the analytical method is applied to a plate with a central crack, and the results are compared with those obtained from digital Monte Carlo simulations.
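A hedged sketch of the setting: a randomized Paris-Erdogan law da/dN = C(ΔK)^m with a random coefficient C and Rayleigh-distributed stress peaks (the peak law of a stationary narrow-band Gaussian process). The constants are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)

# da/dN = C * dK^m, with dK = Y * S * sqrt(pi * a)
m, Y = 3.0, 1.12
a0, ac = 1e-3, 2e-2          # initial / critical crack size (m)
sigma = 60e6                 # RMS stress of the narrow-band process (Pa)

lives = []
for _ in range(500):
    C = rng.lognormal(np.log(5e-27), 0.3)             # random Paris coefficient
    a, n = a0, 0
    while a < ac:
        S = sigma * np.sqrt(2.0 * rng.exponential())  # Rayleigh peak stress
        a += C * (Y * S * np.sqrt(np.pi * a)) ** m
        n += 1
    lives.append(n)

lives = np.array(lives)
print("median life:", np.median(lives), "cycles;  cov:", lives.std() / lives.mean())
```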
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves "weak consensus," i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Qingda, E-mail: weiqd@hqu.edu.cn; Chen, Xian, E-mail: chenxian@amss.ac.cn
In this paper we study two-person nonzero-sum games for continuous-time jump processes with randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then, by constructing approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.
Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-14
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
Motion Among Random Obstacles on a Hyperbolic Space
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Ricciuti, Costantino; Sisti, Francesco
2016-02-01
We consider the motion of a particle along the geodesic lines of the Poincaré half-plane. The particle is specularly reflected when it hits randomly-distributed obstacles that are assumed to be motionless. This is the hyperbolic version of the well-known Lorentz Process studied in the Euclidean context. We analyse the limit in which the density of the obstacles increases to infinity and the size of each obstacle vanishes: under a suitable scaling, we prove that our process converges to a Markovian process, namely a random flight on the hyperbolic manifold.
Nonstationary envelope process and first excursion probability
NASA Technical Reports Server (NTRS)
Yang, J.
1972-01-01
A definition of the envelope of nonstationary random processes is proposed. The establishment of the envelope definition makes it possible to simulate the nonstationary random envelope directly. Envelope statistics, such as the density function, joint density function, moment function, and level crossing rate, which are relevant to analyses of catastrophic failure, fatigue, and crack propagation in structures, are derived. Applications of the envelope statistics to the prediction of structural reliability under random loadings are discussed in detail.
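The sketch below computes one standard envelope, the modulus of the analytic signal, for a synthetic nonstationary narrow-band sample; Yang's paper proposes its own simulation-friendly envelope definition, which this Hilbert-transform stand-in does not reproduce exactly.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)

fs = 200.0
t = np.arange(0.0, 20.0, 1.0 / fs)

# stationary narrow-band core: white noise crudely band-passed around 5 Hz
X = np.fft.rfft(rng.standard_normal(t.size))
f = np.fft.rfftfreq(t.size, 1.0 / fs)
X[(f < 4.0) | (f > 6.0)] = 0.0
xb = np.fft.irfft(X, n=t.size)

y = np.exp(-((t - 8.0) ** 2) / 8.0) * xb   # modulation makes it nonstationary
env = np.abs(hilbert(y))                   # envelope = |analytic signal|

level = y.std()
up = np.mean((env[:-1] < level) & (env[1:] >= level)) * fs
print("peak envelope:", env.max(), " envelope up-crossing rate (1/s):", up)
```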
Perception of Randomness: On the Time of Streaks
ERIC Educational Resources Information Center
Sun, Yanlong; Wang, Hongbin
2010-01-01
People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the…
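The core fact is easy to check by simulation: among patterns of equal length in fair Bernoulli trials, streaks occur latest on average (the classical mean waiting times are 14 flips for HHH versus 10 for HTH).

```python
import random

random.seed(0)

def mean_wait(pattern, trials=100_000):
    """Mean number of fair coin flips until the pattern first appears."""
    total = 0
    for _ in range(trials):
        window, n = "", 0
        while True:
            window = (window + random.choice("HT"))[-len(pattern):]
            n += 1
            if window == pattern:
                break
        total += n
    return total / trials

print("HHH:", mean_wait("HHH"))   # ~14: the streak arrives latest
print("HTH:", mean_wait("HTH"))   # ~10
```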
Melnikov processes and chaos in randomly perturbed dynamical systems
NASA Astrophysics Data System (ADS)
Yagasaki, Kazuyuki
2018-07-01
We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under some nondegeneracy condition, no matter how small the random forcing terms are. This result contrasts sharply with the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov's method and prove that the corresponding Melnikov functions, which we call the Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given, and the Smale-Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory with the Duffing oscillator parametrically subjected to the Ornstein-Uhlenbeck process.
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
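A toy version of the dichotomy: the log of a product of iid positive factors is a sum, so light-tailed log-factors give a log-normal limit, while heavy-tailed ones (in a stable domain of attraction) would give a log-Lévy law instead. The sketch checks the light-tailed case.

```python
import numpy as np

rng = np.random.default_rng(5)

# Multiplicative growth + CLT: log(prod of iid factors) is a sum of iid
# bounded terms, hence asymptotically Gaussian -> the product is log-normal.
n_paths, n_steps = 10_000, 300
factors = rng.uniform(0.8, 1.25, size=(n_paths, n_steps))
logX = np.log(factors).sum(axis=1)

z = (logX - logX.mean()) / logX.std()
print("skewness:", (z**3).mean(), " excess kurtosis:", (z**4).mean() - 3.0)  # both ~0
```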
Large deviations and mixing for dissipative PDEs with unbounded random kicks
NASA Astrophysics Data System (ADS)
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer's criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroups, and a coupling argument. Together these tools constitute a new approach to LDP for infinite-dimensional processes without the strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
Nonlinear Fatigue Damage Model Based on the Residual Strength Degradation Law
NASA Astrophysics Data System (ADS)
Yongyi, Gao; Zhixiao, Su
In this paper, a logarithmic expression describing the residual strength degradation process is developed in order to fit fatigue test results for normalized carbon steel. The definition and expression of fatigue damage due to symmetrical stress with a constant amplitude are also given. This expression also captures the nonlinear character of fatigue damage. Furthermore, the fatigue damage of structures under random stress is analyzed, and an iterative formula describing the fatigue damage process is deduced. Finally, an approximate method for evaluating the fatigue life of structures under repeated random stress block loading is presented through various calculation examples.
Fatigue failure of materials under broad band random vibrations
NASA Technical Reports Server (NTRS)
Huang, T. C.; Lanz, R. W.
1971-01-01
The fatigue life of materials under the multifactor influence of broad-band random excitation has been investigated. Parameters which affect the fatigue life are postulated to be the peak stress, the variance of stress, and the natural frequency of the system. Experimental data were processed by a hybrid computer. Based on the experimental results and regression analysis, a best predicting model has been found. All experimental fatigue lives fall within the 95% confidence intervals of the predicting equation.
Random sequences generation through optical measurements by phase-shifting interferometry
NASA Astrophysics Data System (ADS)
François, M.; Grosges, T.; Barchiesi, D.; Erra, R.; Cornet, A.
2012-04-01
The development of new techniques for producing random sequences with a high level of security is a challenging topic of research in modern cryptography. The proposed method is based on the measurement by phase-shifting interferometry of the speckle signals of the interaction between light and structures. We show how the combination of amplitude and phase distributions (maps) under a numerical process can produce random sequences. The produced sequences satisfy all the statistical requirements of randomness and can be used in cryptographic schemes.
NASA Technical Reports Server (NTRS)
Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes; thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied: the statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and the time when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
An invariance property of generalized Pearson random walks in bounded geometries
NASA Astrophysics Data System (ADS)
Mazzolo, Alain
2009-03-01
Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, limited to Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence, the average length of the trajectories through the domain is independent of the characteristics of the random walk and depends only on the ratio of the domain's volume to its surface. In this paper, thanks to arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and we give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form for the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
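The invariant in question is Cauchy's mean-chord formula; in 2D the mean chord of a convex body under isotropic uniform incidence is pi*A/P. A quick Monte Carlo check for a disk, where a uniformly incident chord is set by a uniform impact parameter:

```python
import numpy as np

rng = np.random.default_rng(11)

# Disk of radius R: mean chord under isotropic uniform incidence should be
# pi * Area / Perimeter = pi * R / 2, independent of the internal transport.
R, n = 1.0, 1_000_000
b = rng.uniform(-R, R, n)                # impact parameter of an incident ray
chords = 2.0 * np.sqrt(R * R - b * b)
print("MC mean chord:", chords.mean(),
      " pi*A/P =", np.pi * (np.pi * R**2) / (2 * np.pi * R))
```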
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
Interplay of Determinism and Randomness: From Irreversibility to Chaos, Fractals, and Stochasticity
NASA Astrophysics Data System (ADS)
Tsonis, A.
2017-12-01
We will start our discussion of randomness by looking exclusively at our formal mathematical system to show that even in this pure and strictly logical system one cannot do away with randomness. By employing simple mathematical models, we will identify the three possible sources of randomness: randomness due to inability to find the rules (irreversibility), randomness due to inability to have infinite power (chaos), and randomness due to stochastic processes. Subsequently we will move from the mathematical system to our physical world to show that randomness, through the quantum mechanical character of small scales, through chaos, and because of the second law of thermodynamics, is an intrinsic property of nature as well. We will subsequently argue that the randomness in the physical world is consistent with the three sources of randomness suggested from the study of simple mathematical systems. Many examples ranging from purely mathematical to natural processes will be presented, which clearly demonstrate how the combination of rules and randomness produces the world we live in. Finally, the principle of least effort or the principle of minimum energy consumption will be suggested as the underlying principle behind this symbiosis between determinism and randomness.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2017-06-01
Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
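The bi-random construction has a simple closed form in the normal case: if a demand is Normal(M, s^2) with the mean M itself Normal(mu0, t^2), the compound demand is Normal(mu0, s^2 + t^2), so a chance constraint reduces to a deterministic inequality. A sketch with invented numbers, not the paper's CCUS data:

```python
import numpy as np
from scipy.stats import norm

# Chance constraint Pr(demand <= x) >= alpha with bi-random demand:
# demand ~ N(M, s^2), M ~ N(mu0, t^2)  =>  demand ~ N(mu0, s^2 + t^2),
# so the deterministic equivalent is x >= mu0 + z_alpha * sqrt(s^2 + t^2).
mu0, s, t_, alpha = 100.0, 8.0, 5.0, 0.95
x_min = mu0 + norm.ppf(alpha) * np.sqrt(s**2 + t_**2)
print("minimum capacity meeting the chance constraint:", round(x_min, 2))
```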
USDA-ARS?s Scientific Manuscript database
Almond processing has been shown to differentially impact metabolizable energy; however, the effect of food form on the gastrointestinal microbiota is under-investigated. We aimed to assess the interrelationship of almond consumption and processing on the gastrointestinal microbiota. A controlled-fe...
Results from the Biology Concept Inventory (BCI), and what they mean for biogeoscience literacy.
NASA Astrophysics Data System (ADS)
Garvin-Doxas, K.; Klymkowsky, M.
2008-12-01
While researching the Biology Concept Inventory (BCI) we found that a wide class of student difficulties in genetics and molecular biology can be traced to deep-seated misconceptions about random processes and molecular interactions. Students believe that random processes are inefficient, while biological systems are very efficient, and are therefore quick to propose their own rational explanations for various processes (from diffusion to evolution). These rational explanations almost always make recourse to a driver (natural selection in genetics, or density gradients in molecular biology), with the process only taking place when the driver is present. The concept of underlying random processes that are taking place all the time, giving rise to emergent behaviour, is almost totally absent. Even students who have taken advanced or college physics, and can discuss diffusion correctly in that context, cannot make the transfer to biological processes. Furthermore, their understanding of molecular interactions is purely geometric, with a lock-and-key model (rather than an energy minimization model) that does not allow for the survival of slight variations of the "correct" molecule. Together with the dominant misconception about random processes, this results in a strong conceptual barrier to understanding evolutionary processes, and can frustrate the success of education programs.
Transient Oscillations in Mechanical Systems of Automatic Control with Random Parameters
NASA Astrophysics Data System (ADS)
Royev, B.; Vinokur, A.; Kulikov, G.
2018-04-01
Transient oscillations in mechanical systems of automatic control with random parameters are a relevant but insufficiently studied issue. In this paper, a modified spectral method was applied to investigate the problem. The nature of the dynamic processes and the phase portraits are analyzed depending on the amplitude and frequency of the external influence. It is evident from the obtained results that the dynamic phenomena occurring in systems with random parameters under external influence are complex, and their study requires further investigation.
Price, John M.; Colflesh, Gregory J. H.; Cerella, John; Verhaeghen, Paul
2014-01-01
We investigated the effects of 10 hours of practice on variations of the N-Back task to investigate the processes underlying possible expansion of the focus of attention within working memory. Using subtractive logic, we showed that random access (i.e., Sternberg-like search) yielded a modest effect (a 50% increase in speed) whereas the processes of forward access (i.e., retrieval in order, as in a standard N-Back task) and updating (i.e., changing the contents of working memory) were executed about 5 times faster after extended practice. We additionally found that extended practice increased working memory capacity as measured by the size of the focus of attention for the forward-access task, but not for variations where probing was in random order. This suggests that working memory capacity may depend on the type of search process engaged, and that certain working-memory-related cognitive processes are more amenable to practice than others. PMID:24486803
Applying the Anderson-Darling test to suicide clusters: evidence of contagion at U. S. universities?
MacKenzie, Donald W
2013-01-01
Suicide clusters at Cornell University and the Massachusetts Institute of Technology (MIT) prompted popular and expert speculation of suicide contagion. However, some clustering is to be expected in any random process. This work tested whether suicide clusters at these two universities differed significantly from those expected under a homogeneous Poisson process, in which suicides occur randomly and independently of one another. Suicide dates were collected for MIT and Cornell for 1990-2012. The Anderson-Darling statistic was used to test the goodness-of-fit of the intervals between suicides to the distribution expected under the Poisson process. Suicides at MIT were consistent with the homogeneous Poisson process, while those at Cornell showed clustering inconsistent with such a process (p = .05). The Anderson-Darling test provides a statistically powerful means to identify suicide clustering in small samples. Practitioners can use this method to test for clustering in relevant communities. The difference in clustering behavior between the two institutions suggests that more institutions should be studied to determine the prevalence of suicide clustering in universities and its causes.
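A sketch of the test's mechanics with synthetic data (the paper uses the actual suicide dates, which are not reproduced here): under a homogeneous Poisson process the inter-event intervals are iid exponential, and scipy's Anderson-Darling test can assess that fit.

```python
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(2)

# Synthetic event dates from a homogeneous Poisson process (mean gap 120 days);
# clustering would show up as a poor exponential fit of the intervals.
events = np.cumsum(rng.exponential(scale=120.0, size=25))
intervals = np.diff(events)

res = anderson(intervals, dist='expon')
print("A-D statistic:", res.statistic)
print("critical values:", res.critical_values, "at", res.significance_level, "%")
```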
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.
Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N
2016-11-01
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a Python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs, which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
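The maximum-entropy principle here amounts to exponential tilting: among synonymous codons, sample with Boltzmann weights on GC count and tune the tilt to steer overall GC content while the amino acid sequence stays fixed. A toy sketch with a deliberately tiny codon table; this is not the NullSeq package's API.

```python
import numpy as np

rng = np.random.default_rng(18)

# Tiny illustrative codon table (three amino acids only)
codons = {
    'K': ['AAA', 'AAG'],
    'A': ['GCT', 'GCC', 'GCA', 'GCG'],
    'F': ['TTT', 'TTC'],
}
gc = lambda c: sum(ch in 'GC' for ch in c)

def sample_seq(aa_seq, lam):
    """Pick synonymous codons with weights exp(lam * GC-count) per codon."""
    out = []
    for aa in aa_seq:
        opts = codons[aa]
        w = np.exp([lam * gc(c) for c in opts])
        out.append(opts[rng.choice(len(opts), p=w / w.sum())])
    return ''.join(out)

aa_seq = 'KAFAKAFKAA' * 30
for lam in (-1.0, 0.0, 1.0):
    s = sample_seq(aa_seq, lam)
    print(f"lam={lam:+.1f}: GC fraction = {sum(ch in 'GC' for ch in s) / len(s):.3f}")
```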
A simple approach to nonlinear estimation of physical systems
Christakos, G.
1988-01-01
Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then take account of a wide range of applications. Under the above assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared. ?? 1988.
Black-Scholes model under subordination
NASA Astrophysics Data System (ADS)
Stanislavsky, A. A.
2003-02-01
In this paper, we consider a new mathematical extension of the Black-Scholes (BS) model in which the stochastic time and stock share price evolution is described by two independent random processes. The parent process is Brownian, and the directing process is inverse to the totally skewed, strictly α-stable process. The subordinated process represents the Brownian motion indexed by an independent, continuous and increasing process. This allows us to introduce the long-term memory effects in the classical BS model.
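A discretized sketch of the subordination: generate an α-stable subordinator via the Kanter/Chambers-Mallows-Stuck representation, invert it on a grid, and read the parent Brownian motion at the inverse (operational) time. Parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def positive_stable(alpha, size, rng):
    """Standard totally skewed alpha-stable (0 < alpha < 1), Kanter/CMS formula."""
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)) * \
           (np.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha)

alpha, T, n = 0.8, 1.0, 10_000
dtau = T / n
D = np.cumsum(dtau ** (1.0 / alpha) * positive_stable(alpha, n, rng))  # subordinator
B = np.cumsum(np.sqrt(dtau) * rng.standard_normal(n))                  # parent BM

t_obs = np.linspace(0.0, D[-1] * 0.5, 500)
E = np.searchsorted(D, t_obs)          # inverse subordinator E(t) on the grid
X = B[np.minimum(E, n - 1)]            # subordinated (time-changed) process
print("X at a few observation times:", X[::125])
```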
NASA Astrophysics Data System (ADS)
Pospisil, J.; Jakubik, P.; Machala, L.
2005-11-01
This article reports the suggestion, realization and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, especially of its combined imaging, detection, sampling and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing and quantization noises. The method applies to the still-camera automatic working regime and a static two-dimensional spatially continuous light-reflection random target with white-noise properties. The theoretical basis of the random-target method is also presented, exploiting the proposed simulation model of the linear optical-intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed by means of a PC with computation programs developed on the basis of MATLAB 6.5. The presented examples and other obtained measurement results demonstrate the sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
Random-order fractional bistable system and its stochastic resonance
NASA Astrophysics Data System (ADS)
Gao, Shilong; Zhang, Li; Liu, Hui; Kan, Bixia
2017-01-01
In this paper, the diffusion of Brownian particles in a viscous liquid subject to stochastic fluctuations of the external environment is modeled by a random-order fractional bistable equation, and the stochastic resonance phenomena in this system, a typical nonlinear dynamic behavior, are investigated. First, the derivation of the random-order fractional bistable system is given; in particular, the random power-law memory is discussed in depth to obtain a physical interpretation of the random-order fractional derivative. Second, the stochastic resonance evoked by the random order and an external periodic force is studied by numerical simulation; in particular, frequency shifting of the periodic output is observed in the stochastic resonance induced by the random-order excitation. Finally, the stochastic resonance of the system under the double stochastic excitation of the random order and internal color noise is also investigated.
Moore, Sarah J; Herst, Patries M; Louwe, Robert J W
2018-05-01
A remarkable improvement in patient positioning was observed after the implementation of various process changes aiming to increase the consistency of patient positioning throughout the radiotherapy treatment chain. However, no tool was available to describe these changes over time in a standardised way. This study reports on the feasibility of Statistical Process Control (SPC) to highlight changes in patient positioning accuracy and facilitate correlation of these changes with the underlying process changes. Metrics were designed to quantify the systematic and random patient deformation as input for the SPC charts. These metrics were based on data obtained from multiple local ROI matches for 191 patients who were treated for head-and-neck cancer during the period 2011-2016. SPC highlighted a significant improvement in patient positioning that coincided with multiple intentional process changes. The observed improvements could be described as a combination of a reduction in outliers and a systematic improvement in the patient positioning accuracy of all patients. SPC is able to track changes in the reproducibility of patient positioning in head-and-neck radiation oncology, and distinguish between systematic and random process changes. Identification of process changes underlying these trends requires additional statistical analysis and seems only possible when the changes do not overlap in time.
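A minimal individuals-type control chart in the spirit of the study, with invented numbers standing in for the per-patient deformation metrics: limits are set from a baseline period and later points are flagged when they fall outside them.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed data: a stable baseline, then a later period containing a
# systematic improvement (smaller deformation values).
baseline = rng.normal(2.0, 0.25, 60)                       # mm
later = np.concatenate([rng.normal(2.0, 0.25, 40),
                        rng.normal(0.9, 0.20, 40)])

mr = np.abs(np.diff(baseline))      # moving ranges of the baseline
sigma = mr.mean() / 1.128           # d2 constant for subgroup size 2
cl = baseline.mean()
ucl, lcl = cl + 3 * sigma, cl - 3 * sigma

flags = (later > ucl) | (later < lcl)
print(f"CL={cl:.2f}, limits=({lcl:.2f}, {ucl:.2f}), flagged={flags.sum()}/{later.size}")
```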
Impact of self-healing capability on network robustness
NASA Astrophysics Data System (ADS)
Shang, Yilun
2015-04-01
A wide spectrum of real-life systems ranging from neurons to botnets display spontaneous recovery ability. Using the generating function formalism applied to static uncorrelated random networks with arbitrary degree distributions, the microscopic mechanism underlying the depreciation-recovery process is characterized and the effect of varying self-healing capability on network robustness is revealed. It is found that the self-healing capability of nodes has a profound impact on the phase transition in the emergence of percolating clusters, and that salient difference exists in upholding network integrity under random failures and intentional attacks. The results provide a theoretical framework for quantitatively understanding the self-healing phenomenon in varied complex systems.
ERIC Educational Resources Information Center
Savolainen, Reijo
2015-01-01
Introduction: The article contributes to the conceptual studies of affective factors in information seeking by examining Kuhlthau's information search process model. Method: This random-digit dial telephone survey of 253 people (75% female) living in a rural, medically under-serviced area of Ontario, Canada, follows up a previous interview study…
Direct generation of all-optical random numbers from optical pulse amplitude chaos.
Li, Pu; Wang, Yun-Cai; Wang, An-Bang; Yang, Ling-Zhen; Zhang, Ming-Jiang; Zhang, Jian-Zhong
2012-02-13
We propose and theoretically demonstrate an all-optical method for directly generating all-optical random numbers from the pulse-amplitude chaos produced by a mode-locked fiber ring laser. Under an appropriate pump intensity, the mode-locked laser can experience a quasi-periodic route to chaos. Such chaos consists of a stream of pulses with a fixed repetition frequency but random intensities. In this method, we require no sampling procedure or externally triggered clock but directly quantize the chaotic pulse stream into a random number sequence via an all-optical flip-flop. Moreover, our simulation results show that the pulse-amplitude chaos has no periodicity and possesses a highly symmetric amplitude distribution. Thus, in theory, the obtained random number sequence exhibits high-quality randomness without post-processing, as verified by industry-standard statistical tests.
Xu, Long; Zhao, Hua; Xu, Caixia; Zhang, Siqi; Zou, Yingyin K; Zhang, Jingwen
2014-02-01
A broadband optical amplification was observed and investigated in Er3+-doped electrostrictive ceramics of lanthanum-modified lead zirconate titanate under a corona atmosphere. Ceramic structure changes caused by UV light, the electric field, and random walks originating from the diffusive process in intrinsically disordered materials may all contribute to the optical amplification and the associated energy storage. A discussion based on optical energy storage and diffusion equations is given to explain the findings. The experiments performed made it possible to study random walks and optical amplification in transparent ceramic materials.
NASA Astrophysics Data System (ADS)
Liu, Hong-Tao; Yang, Bao-He; Lv, Hang-Bing; Xu, Xiao-Xin; Luo, Qing; Wang, Guo-Ming; Zhang, Mei-Yun; Long, Shi-Bing; Liu, Qi; Liu, Ming
2015-02-01
We investigate the effect of the formation process under pulse and dc modes on the performance of one-transistor-one-resistor (1T1R) resistance random access memory (RRAM) devices. All the devices are operated under the same test conditions, except for the initial formation process in the two different modes. Based on the statistical results, the high resistance state (HRS) under the dc forming mode shows a lower value with a better distribution compared with that under the pulse mode. One possible reason for this phenomenon lies in the different properties of the conductive filament (CF) formed in the resistive switching layer under the two modes. For the dc forming mode, the formed filament is thought to be continuous, which makes it hard to rupture, resulting in a lower HRS. In the case of pulse forming, by contrast, the filament is discontinuous and the transport mechanism is governed by hopping; the low resistance state (LRS) can easily be changed by removing a few trapping states from the conducting path, so a higher HRS is observed. However, the HRS value is highly dependent on the length of the gap opened: a slight variation of the gap length will cause wide dispersion of the resistance.
Synchronization invariance under network structural transformations
NASA Astrophysics Data System (ADS)
Arola-Fernández, Lluís; Díaz-Guilera, Albert; Arenas, Alex
2018-06-01
Synchronization processes are ubiquitous despite the many connectivity patterns that complex systems can show. Usually, the emergence of synchrony is a macroscopic observable; however, the microscopic details of the system, such as the underlying network of interactions, are often partially or totally unknown. We already know that different interaction structures can give rise to a common functionality, understood as a common macroscopic observable. Building upon this fact, here we propose network transformations that keep the collective behavior of a large system of Kuramoto oscillators invariant. We derive a method based on information theory principles that allows us to adjust the weights of the structural interactions to map random homogeneous in-degree networks into random heterogeneous networks and vice versa, keeping synchronization values invariant. The results of the proposed transformations reveal an interesting principle: heterogeneous networks can be mapped to homogeneous ones with local information, but the reverse process needs to exploit higher-order information. The formalism provides analytical insight to tackle real complex scenarios when dealing with uncertainty in the measurements of the underlying connectivity structure.
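The macroscopic observable being preserved is the Kuramoto order parameter r = |mean(exp(i*theta))|; the sketch below simply integrates a Kuramoto system on an arbitrary random weighted network and reports r. The network, weights, and frequencies are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(16)

# theta_i' = w_i + sum_j A_ij * sin(theta_j - theta_i), forward Euler
N, K, T, dt = 200, 0.3, 50.0, 0.01
A = K * (rng.random((N, N)) < 0.1)      # random directed weighted coupling
w = rng.normal(0.0, 0.5, N)             # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)

for _ in range(int(T / dt)):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += (w + coupling) * dt

r = np.abs(np.exp(1j * theta).mean())   # macroscopic order parameter
print("order parameter r:", r)
```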
NASA Astrophysics Data System (ADS)
Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang
2016-04-01
This paper proposes the generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow the non-homogeneous continuous time Markov process which is a continuous time and discrete state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop the random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
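In the classical age-replacement special case, the mean cost rate c(T) = [c_p R(T) + c_f F(T)] / ∫₀ᵀ R(u) du is minimized over the preventive replacement age T. A numeric sketch with an assumed Weibull lifetime; the paper's multi-state Markov setting has the same per-cycle structure but a different state model.

```python
import numpy as np

# Replace preventively at age T (cost cp) or on failure (cost cf > cp).
cp, cf = 1.0, 5.0
beta, eta = 2.5, 100.0                    # Weibull shape/scale; beta > 1 = aging

T = np.linspace(1.0, 300.0, 3000)
R = np.exp(-(T / eta) ** beta)            # survival function
mean_up = np.cumsum(R) * (T[1] - T[0])    # integral of R up to T (rectangle rule)
cost_rate = (cp * R + cf * (1.0 - R)) / mean_up

i = cost_rate.argmin()
print(f"optimal replacement age T* ~ {T[i]:.1f}, cost rate {cost_rate[i]:.4f}")
```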
Research on photodiode detector-based spatial transient light detection and processing system
NASA Astrophysics Data System (ADS)
Liu, Meiying; Wang, Hu; Liu, Yang; Zhao, Hui; Nan, Meng
2016-10-01
In order to realize real-time identification and processing of spatial transient light signals, the features and energy of the captured target light signal are first described and quantitatively calculated. Considering that a transient light signal occurs randomly, has a short duration, and has an evident beginning and ending, a photodiode-detector-based spatial transient light detection and processing system is proposed and designed in this paper. This system has a large field of view and realizes non-imaging energy detection of random, transient and weak point targets under the complex background of the space environment. Extracting a weak signal from a strong background is difficult; since the background signal changes slowly while the target signal changes quickly, a filter is adopted to subtract the background. Variable-speed sampling is realized by sampling data points with a gradually increasing interval, which resolves two difficulties: the real-time processing of a large amount of data, and the power consumption required to store it. Test results with a self-made simulated signal demonstrate the effectiveness of the design scheme, and the practical system operates reliably. The detection and processing of the target signal under a strong-sunlight background was realized. The results indicate that the system can detect the target signal's characteristic waveform in real time and monitor the system's working parameters. The prototype design could be used in a variety of engineering applications.
A Perron-Frobenius Type of Theorem for Quantum Operations
NASA Astrophysics Data System (ADS)
Lagro, Matthew; Yang, Wei-Shih; Xiong, Sheng
2017-10-01
We define a special class of quantum operations we call Markovian and show that it has the same spectral properties as a corresponding Markov chain. We then consider a convex combination of a quantum operation and a Markovian quantum operation and show that under a norm condition its spectrum has the same properties as in the conclusion of the Perron-Frobenius theorem if its Markovian part does. Moreover, under a compatibility condition of the two operations, we show that its limiting distribution is the same as the corresponding Markov chain. We apply our general results to partially decoherent quantum random walks with decoherence strength 0 ≤ p ≤ 1. We obtain a quantum ergodic theorem for partially decoherent processes. We show that for 0 < p ≤ 1, the limiting distribution of a partially decoherent quantum random walk is the same as the limiting distribution for the classical random walk.
Analysis of randomly time varying systems by gaussian closure technique
NASA Astrophysics Data System (ADS)
Dash, P. K.; Iyengar, R. N.
1982-07-01
The Gaussian probability closure technique is applied to study the random response of multi-degree-of-freedom stochastically time-varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude-limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.
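A scalar illustration of the closure idea for dx = -(x + eps*x^3) dt + sigma dW: the second-moment equation involves E[x^4], which the Gaussian closure replaces by 3*m2^2. The resulting steady state is compared against an Euler-Maruyama ergodic average; the example system is an assumption for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(9)

# dm2/dt = -2*m2 - 2*eps*E[x^4] + sigma^2, closed via E[x^4] = 3*m2^2
# steady state: 6*eps*m2^2 + 2*m2 - sigma^2 = 0
eps, sigma = 0.5, 1.0
m2_closure = (-2.0 + np.sqrt(4.0 + 24.0 * eps * sigma**2)) / (12.0 * eps)

# Euler-Maruyama reference: long single path, ergodic average after burn-in
dt, n, burn = 1e-3, 200_000, 50_000
x, acc = 0.0, []
for k in range(n):
    x += -(x + eps * x**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if k > burn:
        acc.append(x * x)

print("closure E[x^2]:", m2_closure, "  simulated:", np.mean(acc))
```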
Recommendations and illustrations for the evaluation of photonic random number generators
NASA Astrophysics Data System (ADS)
Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi
2017-09-01
The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
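A bare-bones version of the recommended estimator: partition the signal with resolution ε, sample with stride τ, and estimate the entropy rate as a difference of block entropies, h(ε, τ) = (H_{d+1} - H_d)/τ. The iid Gaussian "source" below is a placeholder for a measured optical signal.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(8)

def block_entropy(symbols, d):
    """Shannon entropy (bits) of length-d words of the symbol sequence."""
    words = Counter(tuple(symbols[i:i + d]) for i in range(len(symbols) - d + 1))
    p = np.array(list(words.values()), float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def h_eps_tau(x, eps, tau, d=2):
    """Cohen-Procaccia-style estimate: partition at resolution eps, stride tau."""
    s = np.floor(x[::tau] / eps).astype(int)
    return (block_entropy(s, d + 1) - block_entropy(s, d)) / tau

x = rng.standard_normal(100_000)   # idealized stand-in entropy source
for eps in (2.0, 1.0, 0.5):
    print(f"eps={eps}: h ~= {h_eps_tau(x, eps, tau=1):.3f} bits/sample")
```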
Diffusion in randomly perturbed dissipative dynamics
NASA Astrophysics Data System (ADS)
Rodrigues, Christian S.; Chechkin, Aleksei V.; de Moura, Alessandro P. S.; Grebogi, Celso; Klages, Rainer
2014-11-01
Dynamical systems having many coexisting attractors present interesting properties from both fundamental theoretical and modelling points of view. When such dynamics is under bounded random perturbations, the basins of attraction are no longer invariant and there is the possibility of transport among them. Here we introduce a basic theoretical setting which enables us to study this hopping process from the perspective of anomalous transport using the concept of a random dynamical system with holes. We apply it to a simple model by investigating the role of hyperbolicity for the transport among basins. We show numerically that our system exhibits non-Gaussian position distributions, power-law escape times, and subdiffusion. Our simulation results are reproduced consistently from stochastic continuous time random walk theory.
Fragmentation under the Scaling Symmetry and Turbulent Cascade with Intermittency
NASA Technical Reports Server (NTRS)
Gorokhovski, M.
2003-01-01
Fragmentation plays an important role in a variety of physical, chemical, and geological processes. Examples include atomization in sprays, crushing of rocks, explosion and impact of solids, polymer degradation, etc. Although each individual action of fragmentation is a complex process, the number of these elementary actions is large. It is natural to abstract a simple 'effective' scenario of fragmentation and to represent its essential features. One such model is fragmentation under the scaling symmetry: each breakup action reduces the typical length of fragments, r → αr, by an independent random multiplier α (0 < α < 1), which is governed by the fragmentation intensity spectrum q(α), with ∫₀¹ q(α) dα = 1. This scenario was proposed by Kolmogorov (1941) when he considered the breakup of solid carbon particles. Describing the breakup as a random discrete process, Kolmogorov stated that at late times such a process leads to the log-normal distribution. In Gorokhovski & Saveliev, fragmentation under the scaling symmetry has been reviewed as a continuous evolution process, with new features established. The objective of this paper is twofold. First, the paper synthesizes and completes the theoretical part of Gorokhovski & Saveliev. Second, the paper shows a new application of the fragmentation theory under scale invariance. This application concerns the turbulent cascade with intermittency. We formulate here a model describing the evolution of the velocity increment distribution along the progressively decreasing length scale. The model shows that when the turbulent length scale gets smaller, the velocity increment distribution has a growing central peak and develops stretched tails. The intermittency in turbulence is manifested in the same way: large fluctuations of velocity provoke the highest strain in narrow (dissipative) regions of the flow.
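The log-normal limit is immediate to see numerically: after n independent breakups r_n = r₀·α₁⋯α_n, so log r_n is a sum of iid terms. A sketch with an assumed uniform spectrum q(α):

```python
import numpy as np

rng = np.random.default_rng(10)

# r_n = r0 * a_1 * ... * a_n  =>  log r_n is a random-walk sum, hence
# the fragment size distribution becomes log-normal at large n.
r0, n_steps, n_frag = 1.0, 100, 20_000
alpha = rng.uniform(0.3, 0.95, size=(n_frag, n_steps))   # assumed spectrum q(alpha)
log_r = np.log(r0) + np.log(alpha).sum(axis=1)

z = (log_r - log_r.mean()) / log_r.std()
print("skewness:", (z**3).mean(), " excess kurtosis:", (z**4).mean() - 3.0)  # both ~0
```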
NASA Astrophysics Data System (ADS)
Hu, D. L.; Liu, X. B.
Both periodic loading and random forces commonly co-exist in real engineering applications. However, the dynamic behavior, especially the dynamic stability, of systems under combined parametric periodic and random excitations has received little attention in the literature. In this study, the moment Lyapunov exponent and stochastic stability of a binary airfoil under combined harmonic and non-Gaussian colored noise excitations are investigated. The noise is simplified to an Ornstein-Uhlenbeck process by applying the path-integral method. Via the singular perturbation method, second-order expansions of the moment Lyapunov exponent are obtained, which agree well with results obtained by Monte Carlo simulation. Finally, the effects of the noise and of parametric resonance (such as subharmonic resonance and combination additive resonance) on the stochastic stability of the binary airfoil system are discussed.
Transcription, intercellular variability and correlated random walk.
Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar
2008-11-01
We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
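The scaled-Beta claim can be checked directly: for the piecewise-deterministic system dx/dt = s(t) - x with a telegraph signal s switching off→on at rate a and on→off at rate b (rates in units of the decay rate), the stationary law of x is Beta(a, b). A simulation sketch with assumed rates:

```python
import numpy as np
from scipy.stats import beta, kstest

rng = np.random.default_rng(12)

a, b = 2.0, 3.0                 # on-rate, off-rate (units of mRNA decay rate)
dt, n = 1e-3, 1_000_000
s, x, samples = 0, 0.5, []
for k in range(n):
    if s == 0 and rng.random() < a * dt:      # transcription switches on
        s = 1
    elif s == 1 and rng.random() < b * dt:    # transcription switches off
        s = 0
    x += (s - x) * dt                         # scaled mRNA level in (0, 1)
    if k % 500 == 0 and k > 100_000:          # thin to reduce correlation
        samples.append(x)

print(kstest(samples, beta(a, b).cdf))        # should not reject Beta(a, b)
```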
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
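A direct simulation of the variant is short; the sketch below fixes the first N steps to +1 and then applies the usual elephant rule (recall a uniformly random earlier step; repeat it with probability p, reverse it otherwise). Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def erw(p, N, T):
    # initial state: N fixed +1 steps, as in the model described above
    steps = [1] * N
    for _ in range(T):
        s = steps[rng.integers(len(steps))]   # recall a random earlier step
        steps.append(s if rng.random() < p else -s)
    return np.cumsum(steps)

# for the standard elephant walk, p > 3/4 is the anomalous (superdiffusive) regime
for p in (0.5, 0.9):
    X = np.array([erw(p, N=10, T=5000)[-1] for _ in range(200)])
    print(p, X.mean(), X.std())
```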
On a Stochastic Failure Model under Random Shocks
NASA Astrophysics Data System (ADS)
Cha, Ji Hwan
2013-02-01
In most conventional settings, the events caused by an external shock are initiated at the moments of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but after a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
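A Monte Carlo sketch of the delayed-shock mechanism, under assumed forms for the intensity and the delay distribution: the system fails at the first time some shock's occurrence time plus its random delay is reached, so the survival function is P(T > t) = exp(-∫0^t λ(s) F_D(t-s) ds), which the simulation checks.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(4)
lam = lambda t: 0.2 + 0.05 * t          # assumed increasing NHPP intensity
delay = stats.expon(scale=2.0)          # assumed random activation delay D

def sample_failure_time(horizon=60.0):
    # thinning to sample the NHPP, then add a random delay to each shock
    lmax, t, times = lam(horizon), 0.0, []
    while t < horizon:
        t += rng.exponential(1 / lmax)
        if t < horizon and rng.random() < lam(t) / lmax:
            times.append(t + delay.rvs(random_state=rng))
    return min(times) if times else np.inf

t0 = 10.0
mc = np.mean([sample_failure_time() > t0 for _ in range(20000)])
# theory: P(T > t0) = exp(-int_0^t0 lam(s) F_D(t0 - s) ds)
theory = np.exp(-integrate.quad(lambda s: lam(s) * delay.cdf(t0 - s), 0, t0)[0])
print(mc, theory)
```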
Percolation, sliding, localization and relaxation in topologically closed circuits
NASA Astrophysics Data System (ADS)
Hurowitz, Daniel; Cohen, Doron
2016-03-01
Considering a random walk in a random environment in a topologically closed circuit, we explore the implications of the percolation and sliding transitions for its relaxation modes. A complementary question regarding the “delocalization” of eigenstates of non-hermitian Hamiltonians has been addressed by Hatano, Nelson, and followers. But we show that for a conservative stochastic process the implied spectral properties are dramatically different. In particular we determine the threshold for under-damped relaxation, and observe “complexity saturation” as the bias is increased.
An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling
NASA Astrophysics Data System (ADS)
Qiu, X. N.; Lau, H. Y. K.
The problem of job shop scheduling in a dynamic environment where random perturbations exist in the system is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP), in which unexpected events occur randomly. The algorithm is designed based on dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate the algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively, as it is capable of triggering the rescheduling process optimally with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the number of reschedulings under the defined objective and to keep the scheduling process stable and efficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick
We analyze component evolution in general random intersection graphs (RIGs) and give conditions on the existence and uniqueness of the giant component. Our techniques generalize existing methods for the analysis of component evolution in RIGs. That is, we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdos-Renyi graphs. The main challenge comes from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step during the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we perceive RIGs as an important random structure which has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.
Garvin-Doxas, Kathy
2008-01-01
While researching student assumptions for the development of the Biology Concept Inventory (BCI; http://bioliteracy.net), we found that a wide class of student difficulties in molecular and evolutionary biology appears to be based on deep-seated, and often unaddressed, misconceptions about random processes. Data were based on more than 500 open-ended (primarily) college student responses, submitted online and analyzed through our Ed's Tools system, together with 28 thematic and think-aloud interviews with students, and the responses of students in introductory and advanced courses to questions on the BCI. Students believe that random processes are inefficient, whereas biological systems are very efficient. They are therefore quick to propose their own rational explanations for various processes, from diffusion to evolution. These rational explanations almost always make recourse to a driver, e.g., natural selection in evolution or concentration gradients in molecular biology, with the process taking place only when the driver is present, and ceasing when the driver is absent. For example, most students believe that diffusion only takes place when there is a concentration gradient, and that the mutational processes that change organisms occur only in response to natural selection pressures. An understanding that random processes take place all the time and can give rise to complex and often counterintuitive behaviors is almost totally absent. Even students who have had advanced or college physics, and can discuss diffusion correctly in that context, cannot make the transfer to biological processes, and passing through multiple conventional biology courses appears to have little effect on their underlying beliefs. PMID:18519614
Random ambience using high fidelity images
NASA Astrophysics Data System (ADS)
Abu, Nur Azman; Sahib, Shahrin
2011-06-01
Most secure communication nowadays mandates true random keys as an input. These operations are mostly designed and taken care of by the developers of the cryptosystem. Due to the nature of confidential crypto development today, pseudorandom keys are typically designed and still preferred by the developers of the cryptosystem. However, pseudorandom keys are predictable, periodic and repeatable, and hence carry minimal entropy. True random keys are believed to be generated only via hardware random number generators. Careful statistical analysis is still required to have any confidence that the process and apparatus generate numbers sufficiently random to suit cryptographic use. In the underlying research, each moment in life is considered unique in itself. The random key is unique for the given moment generated by the user whenever he or she needs random keys in practical secure communication. An ambience of high-fidelity digital images is tested for its randomness according to the NIST Statistical Test Suite. A recommendation on generating simple random cryptographic keys live at 4 megabits per second is also reported.
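As a hedged sketch of the verification step, the code below extracts least-significant bits from an image array (a synthetic stand-in here, not a real photograph) and applies the NIST SP 800-22 monobit frequency test; a real assessment would run the full suite.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(5)
# stand-in for a captured high-fidelity image (uint8 pixel values)
image = rng.integers(0, 256, size=(1024, 1024), dtype=np.uint8)

bits = (image & 1).ravel()               # least-significant bits as raw key material
s = np.sum(2 * bits.astype(int) - 1)     # +/-1 sum of the bit stream
p_value = erfc(abs(s) / sqrt(2 * bits.size))   # NIST monobit frequency test
print("monobit p-value:", p_value, "pass" if p_value >= 0.01 else "fail")
```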
Evidence for selective executive function deficits in ecstasy/polydrug users.
Fisk, J E; Montgomery, C
2009-01-01
Previous research has suggested that the separate aspects of executive functioning are differentially affected by ecstasy use. Although the inhibition process appears to be unaffected by ecstasy use, it is unclear whether this is true of heavy users under conditions of high demand. Tasks loading on the updating process have been shown to be adversely affected by ecstasy use. However, it remains unclear whether the deficits observed reflect the executive aspects of the tasks or whether they are domain general in nature affecting both verbal and visuo-spatial updating. Fourteen heavy ecstasy users (mean total lifetime use 1000 tablets), 39 light ecstasy users (mean total lifetime use 150 tablets) and 28 non-users were tested on tasks loading on the inhibition executive process (random letter generation) and the updating component process (letter updating, visuo-spatial updating and computation span). Heavy users were not impaired in random letter generation even under conditions designed to be more demanding. Ecstasy-related deficits were observed on all updating measures and were statistically significant for two of the three measures. Following controls for various aspects of cannabis use, statistically significant ecstasy-related deficits were obtained on all three updating measures. It was concluded that the inhibition process is unaffected by ecstasy use even among heavy users. By way of contrast, the updating process appears to be impaired in ecstasy users with the deficit apparently domain general in nature.
X-ray microtomography study of the compaction process of rods under tapping.
Fu, Yang; Xi, Yan; Cao, Yixin; Wang, Yujie
2012-05-01
We present an x-ray microtomography study of the compaction process of cylindrical rods under tapping. The process is monitored by measuring the evolution of the orientational order parameter, local, and overall packing densities as a function of the tapping number for different tapping intensities. The slow relaxation dynamics of the orientational order parameter can be well fitted with a stretched-exponential law with stretching exponents ranging from 0.9 to 1.6. The corresponding relaxation time versus tapping intensity follows an Arrhenius behavior which is reminiscent of the slow dynamics in thermal glassy systems. We also investigated the boundary effect on the ordering process and found that boundary rods order faster than interior ones. In searching for the underlying mechanism of the slow dynamics, we estimated the initial random velocities of the rods under tapping and found that the ordering process is compatible with a diffusion mechanism. The average coordination number as a function of the tapping number at different tapping intensities has also been measured, which spans a range from 6 to 8.
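Fitting the reported stretched-exponential relaxation S(n) = S_inf - dS*exp(-(n/tau)^beta) to an order-parameter-versus-tap-number curve is a short exercise with scipy; the data below are synthetic stand-ins for the measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched(n, S_inf, dS, tau, beta):
    # stretched-exponential approach of the order parameter to its plateau
    return S_inf - dS * np.exp(-(n / tau) ** beta)

taps = np.arange(1, 2001)
true = stretched(taps, 0.9, 0.7, 300.0, 1.2)
data = true + 0.01 * np.random.default_rng(6).standard_normal(taps.size)

popt, _ = curve_fit(stretched, taps, data, p0=(0.8, 0.5, 100.0, 1.0))
print("tau = %.1f, beta = %.2f" % (popt[2], popt[3]))  # beta ~ 0.9-1.6 in the paper
```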
Quantum-like Viewpoint on the Complexity and Randomness of the Financial Market
NASA Astrophysics Data System (ADS)
Choustova, Olga
In economics and financial theory, analysts use random walk and more general martingale techniques to model behavior of asset prices, in particular share prices on stock markets, currency exchange rates and commodity prices. This practice has its basis in the presumption that investors act rationally and without bias, and that at any moment they estimate the value of an asset based on future expectations. Under these conditions, all existing information affects the price, which changes only when new information comes out. By definition, new information appears randomly and influences the asset price randomly. Corresponding continuous time models are based on stochastic processes (this approach was initiated in the thesis of [4]), see, e.g., the books of [33] and [37] for historical and mathematical details.
Robustness of Controllability for Networks Based on Edge-Attack
Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong
2014-01-01
We study the controllability of networks in the process of cascading failures under two different attacking strategies, random and intentional attack. For the highest-load edge attack, it is found that the controllability of the Erdős-Rényi network with moderate average degree is less robust, whereas the scale-free network with moderate power-law exponent shows strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, we find that the robustness of control plays an important role in cascades for large removal fractions. The simulation results show that, for scale-free networks with various power-law exponents, a larger scale of cascades does not necessarily mean a larger increment of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in strongly connected components. PMID:24586507
Mid-infrared optical parametric oscillator pumped by an amplified random fiber laser
NASA Astrophysics Data System (ADS)
Shang, Yaping; Shen, Meili; Wang, Peng; Li, Xiao; Xu, Xiaojun
2017-01-01
Recently, the concept of random fiber lasers has attracted a great deal of attention for its ability to generate incoherent light without a traditional laser resonator, which is free of mode competition and ensures a stationary narrow-band continuous modeless spectrum. In this Letter, we report the first, to the best of our knowledge, optical parametric oscillator (OPO) pumped by an amplified 1070 nm random fiber laser (RFL), built in order to generate a stationary mid-infrared (mid-IR) laser. The experiment realized watt-level laser output in the mid-IR range and operated relatively stably. The use of the RFL seed source allowed us to take advantage of its stable time-domain characteristics. The beam profile, spectrum and time-domain properties of the signal light were measured to analyze the frequency down-conversion process under this new pumping condition. The results suggest that the near-infrared (near-IR) signal light 'inherited' good beam performance from the pump light. These results should benefit the further development of optical parametric processes under different pumping conditions.
Self-Similar Random Process and Chaotic Behavior In Serrated Flow of High Entropy Alloys
Chen, Shuying; Yu, Liping; Ren, Jingli; Xie, Xie; Li, Xueping; Xu, Ying; Zhao, Guangfeng; Li, Peizhen; Yang, Fuqian; Ren, Yang; Liaw, Peter K.
2016-01-01
The statistical and dynamic analyses of the serrated-flow behavior in the nanoindentation of a high-entropy alloy, Al0.5CoCrCuFeNi, at various holding times and temperatures, are performed to reveal the hidden order associated with the seemingly-irregular intermittent flow. Two distinct types of dynamics are identified in the high-entropy alloy, which are based on the chaotic time-series, approximate entropy, fractal dimension, and Hurst exponent. The dynamic plastic behavior at both room temperature and 200 °C exhibits a positive Lyapunov exponent, suggesting that the underlying dynamics is chaotic. The fractal dimension of the indentation depth increases with the increase of temperature, and there is an inflection at the holding time of 10 s at the same temperature. A large fractal dimension suggests the concurrent nucleation of a large number of slip bands. In particular, for the indentation with the holding time of 10 s at room temperature, the slip process evolves as a self-similar random process with a weak negative correlation similar to a random walk. PMID:27435922
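Of the diagnostics listed, the Hurst exponent is the simplest to sketch; below is a basic rescaled-range (R/S) estimator applied to a synthetic series (for a memoryless random walk the increments give H near 0.5; a weak negative correlation such as the paper reports would push H slightly below 0.5).

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(4096))   # synthetic stand-in for the depth signal

def hurst_rs(series, windows=(8, 16, 32, 64, 128, 256)):
    # classical rescaled-range estimate: slope of log(R/S) vs log(window)
    inc = np.diff(series)
    rs = []
    for w in windows:
        chunks = inc[: len(inc) // w * w].reshape(-1, w)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)
        S = chunks.std(axis=1)
        rs.append(np.mean(R[S > 0] / S[S > 0]))
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope

print("H ~", hurst_rs(x))
```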
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
Heo, Moonseong; Meissner, Paul; Litwin, Alain H; Arnsten, Julia H; McKee, M Diane; Karasz, Alison; McKinley, Paula; Rehm, Colin D; Chambers, Earle C; Yeh, Ming-Chin; Wylie-Rosett, Judith
2017-01-01
Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold-standard design, but by design it forces participants to comply with a randomly assigned intervention regardless of their preference. Therefore, the randomized clinical trial may impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, and to maintain randomization, we propose an alternative design that allows participants' preference after randomization, which we call a "preference option randomized design (PORD)". In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed consent process before randomization. Specifically, the consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered not to have an alternate preference. In sum, the preference option randomized design intends to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme enables the definition of five effects that are interconnected through common design parameters (comparative, preference, selection, intent-to-treat, and overall/as-treated) to collectively guide decision making between interventions. Statistical power functions for testing all these effects are derived, and simulations verified the validity of the power functions under normal and binomial distributions.
Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions
NASA Astrophysics Data System (ADS)
Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia
2018-03-01
Results of various investigations show a relationship between flotation parameters and gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and the analysis of this distribution is useful for a mathematical description of the flotation process. The flotation process depends on many variable factors. These are mainly occurrences such as the collision of a single particle with a gas bubble, adhesion of the particle to the bubble surface, and the detachment process. These factors are characterized by randomness. Because of that, it is only possible to speak of the probability of occurrence of one of these events, which directly affects the speed of the process, and thus the flotation rate constant. The probability of bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, air consumption, degree of pulp aeration, energy dissipation and average feed particle size. Appropriate identification and description of the parameters of gas bubble dispersion help to complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas phase dispersion through the size distribution of air bubbles in a flotation chamber under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods in the Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.
Transpiration rates of rice plants treated with Trichoderma spp.
NASA Astrophysics Data System (ADS)
Doni, Febri; Anizan, I.; Che Radziah C. M., Z.; Yusoff, Wan Mohtar Wan
2014-09-01
Trichoderma spp. are considered successful plant growth promoting fungi and have a positive role in habitat engineering. In this study, the potential for Trichoderma spp. to regulate the transpiration process in rice plants was assessed experimentally under greenhouse conditions using a completely randomized design. The study revealed that Trichoderma spp. have the potential to enhance the growth of rice plants through transpirational processes. The results add to our understanding of the role of Trichoderma spp. in improving rice physiological processes.
Population density equations for stochastic processes with memory kernels
NASA Astrophysics Data System (ADS)
Lai, Yi Ming; de Kamps, Marc
2017-06-01
We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory, describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky- and quadratic-integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately for both excitatory and inhibitory input under the assumption that all inputs are generated by one renewal process.
Otolith Trace Element Chemistry of Juvenile Black Rockfish
NASA Astrophysics Data System (ADS)
Hardin, W.; Bobko, S. J.; Jones, C. M.
2002-12-01
In the summer of 1997 we collected young-of-the-year (YOY) black rockfish, Sebastes melanops, from floating docks and seagrass beds in Newport and Coos Bay, Oregon. Otoliths were extracted from randomly selected fish, sectioned and polished under general laboratory conditions, and cleaned in a class 100 clean room. We used Laser Ablation - Inductively Coupled Plasma Mass Spectrometry (LA-ICPMS) to analyze the elemental composition of the estuarine phase of the otoliths. While we observed differences in Mn/Ca ratios between the two estuaries, there was no statistical difference in otolith trace element chemistry ratios between estuaries using MANOVA. To determine whether laboratory processing of otoliths might have impeded us from detecting differences in otolith chemistry, we conducted a second experiment. Right and left otoliths from 10 additional Coos Bay fish were randomly allocated to two processing methods. The first method was identical to our initial otolith processing: sectioning and polishing under normal laboratory conditions. In the second method, polishing was done in the clean room. For both methods, otoliths went through a final cleaning in the clean room and were analyzed with LA-ICPMS. While we did not detect statistical differences in element ratios between the two methods, otoliths polished outside the clean room had much higher variances. This increased variance might have lowered our ability to detect differences in otolith chemistry between estuaries. Based on our results, we recommend polishing otoliths under clean room conditions to reduce contamination.
Nonlinear random response prediction using MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.
1993-01-01
An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
Remote sensing of Earth terrain
NASA Technical Reports Server (NTRS)
Kong, Jin AU; Yueh, Herng-Aung
1990-01-01
The layered random medium model is used to investigate the fully polarimetric scattering of electromagnetic waves from vegetation. The vegetation canopy is modeled as an anisotropic random medium containing nonspherical scatterers with preferred alignment. The underlying medium is considered as a homogeneous half space. The scattering effects of the vegetation canopy are characterized by 3-D correlation functions with variances and correlation lengths corresponding, respectively, to the fluctuation strengths and the physical geometries of the scatterers. The strong fluctuation theory is used to calculate the anisotropic effective permittivity tensor of the random medium, and the distorted Born approximation is then applied to obtain the covariance matrix which describes the fully polarimetric scattering properties of the vegetation field. This model accounts for all the interaction processes between the boundaries and the scatterers and includes all the coherent effects due to wave propagation in different directions, such as constructive and destructive interference. For a vegetation canopy with low attenuation, the boundary between the vegetation and the underlying medium can give rise to significant coherent effects.
NASA Astrophysics Data System (ADS)
Dong, Siqun; Zhao, Dianli
2018-01-01
This paper studies the subcritical, near-critical and supercritical asymptotic behavior of a reversible random coagulation-fragmentation polymerization process as N → ∞, with the number of distinct ways to form a k-cluster from k units satisfying f(k) = (1 + o(1)) c r^(-k) e^(-k^α) k^(-β), where 0 < α < 1 and β > 0. When the cluster size is small, its distribution is proved to converge to the Gaussian distribution. For medium clusters, the distribution converges to a Poisson distribution in the supercritical stage, and no large clusters exist in this stage. Furthermore, the largest length of polymers of size N is of order ln N in the subcritical stage under α ≤ 1/2.
Forced oscillations of cracked beam under the stochastic cyclic loading
NASA Astrophysics Data System (ADS)
Matsko, I.; Javors'kyj, I.; Yuzefovych, R.; Zakrzewski, Z.
2018-05-01
An analysis of the forced oscillations of a cracked beam using statistical methods for periodically correlated random processes is presented. The oscillation realizations are obtained from numerical solutions of second-order differential equations for the case when the applied force is described by a sum of a harmonic and a stationary random process. It is established that, due to the appearance of a crack, the forced oscillations acquire the properties of second-order periodic non-stationarity. It is shown that in a super-resonance regime the covariance and spectral characteristics, which describe the non-stationary structure of the forced oscillations, are more sensitive to crack growth than the characteristics of the oscillations' deterministic part. Diagnostic indicators formed on their basis allow the detection of small cracks.
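A minimal numerical sketch of such a system, assuming a bilinear ("breathing crack") stiffness and a harmonic-plus-white-noise force rather than the paper's beam model: folding the response on the forcing period exposes the periodic variation of the variance, i.e., the second-order periodic non-stationarity.

```python
import numpy as np

rng = np.random.default_rng(8)
dt, T = 1e-3, 200.0
n = int(T / dt)
om0, zeta, dk = 10.0, 0.02, 0.15   # natural frequency, damping, crack stiffness drop
Om, F0, sig = 3.0, 1.0, 0.5        # forcing frequency and amplitudes (illustrative)

x, v = 0.0, 0.0
xs = np.empty(n)
for k in range(n):
    t = k * dt
    stiff = om0**2 * ((1 - dk) if x > 0 else 1.0)   # crack opens when x > 0
    # Euler step; the white-noise force is scaled by 1/sqrt(dt), a crude sketch
    f = F0 * np.cos(Om * t) + sig * rng.standard_normal() / np.sqrt(dt)
    a = f - 2 * zeta * om0 * v - stiff * x
    x += v * dt; v += a * dt
    xs[k] = x

# fold the record on the forcing period to expose periodic nonstationarity
period = int(round(2 * np.pi / Om / dt))
folded = xs[: n // period * period].reshape(-1, period)
print("variance across one period:", folded.var(axis=0)[::period // 8])
```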
Fractional Stochastic Field Theory
NASA Astrophysics Data System (ADS)
Honkonen, Juha
2018-02-01
Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
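A toy version of the random sub-sampling plus inpainting idea, using plain interpolation in place of the dictionary-learning or other learned priors used in actual STEM reconstruction:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(9)
yy, xx = np.mgrid[0:128, 0:128]
truth = np.sin(xx / 6.0) * np.cos(yy / 9.0)     # synthetic stand-in for an image

mask = rng.random(truth.shape) < 0.10           # keep ~10% of pixels (low dose)
pts = np.argwhere(mask)
vals = truth[mask]
# fill the unsampled pixels by cubic interpolation from the sampled ones
recon = griddata(pts, vals, np.argwhere(~mask), method="cubic")
full = truth.copy()
full[~mask] = recon
print("RMS error:", np.sqrt(np.nanmean((full - truth) ** 2)))
```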
ERIC Educational Resources Information Center
Pustejovsky, James E.; Runyon, Christopher
2014-01-01
Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
NASA Astrophysics Data System (ADS)
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
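Among the processes listed, the random telegraph signal gives a compact example of spectral characterisation; the sketch below simulates one and compares the low-frequency power spectral density with the Lorentzian value S(0) = 1/λ for a ±1 signal whose sign flips at Poisson rate λ.

```python
import numpy as np

rng = np.random.default_rng(10)
lam, dt, n = 5.0, 1e-3, 2**20
flips = rng.random(n) < lam * dt          # Poisson switching events per sample
x = 1.0 - 2.0 * (np.cumsum(flips) % 2)    # +/-1 random telegraph signal

X = np.fft.rfft(x - x.mean())
psd = (np.abs(X) ** 2) * dt / n           # periodogram estimate of the (two-sided) PSD
f = np.fft.rfftfreq(n, dt)
# autocorrelation exp(-2*lam*|tau|) gives a Lorentzian PSD with S(0) = 1/lam
print("low-frequency PSD ~", psd[1:10].mean(), " theory ~", 1 / lam)
```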
Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.
Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale
2016-08-01
Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.
Evaluation of some random effects methodology applicable to bird ringing data
Burnham, K.P.; White, Gary C.
2002-01-01
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, ..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with variance E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation σ², and estimation of E(S) and its variance, where the latter includes a component for σ² as well as the traditional sampling component. Furthermore, the random effects model leads to shrinkage estimators of the Si that are improved (in mean square error) compared to the MLEs from the unrestricted time-effects model. Appropriate confidence intervals based on the shrinkage estimators are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of the estimator of σ², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, ..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed-effects MLE for the Si.
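A toy simulation of the shrinkage-versus-MLE comparison, with known process and sampling variances (the real method estimates σ² from the data; the values and the normal approximation here are illustrative assumptions, not the MARK implementation):

```python
import numpy as np

rng = np.random.default_rng(11)
k, E_S, sig2, tau2 = 25, 0.7, 0.005, 0.02   # years, mean survival, process/sampling var
mse_mle, mse_shr = 0.0, 0.0
for _ in range(2000):
    S = E_S + rng.normal(0, np.sqrt(sig2), k)      # true annual survival S_i
    S_hat = S + rng.normal(0, np.sqrt(tau2), k)    # "MLE" with sampling error
    w = sig2 / (sig2 + tau2)                       # known-variance shrinkage weight
    S_tilde = w * S_hat + (1 - w) * S_hat.mean()   # shrink toward the overall mean
    mse_mle += np.mean((S_hat - S) ** 2)
    mse_shr += np.mean((S_tilde - S) ** 2)
print("MSE MLE:", mse_mle / 2000, " MSE shrinkage:", mse_shr / 2000)
```

The shrinkage estimator should show the smaller mean square error, which is the pattern the Monte Carlo evaluation in the paper reports.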
The voluntary-threat approach to control nonpoint source pollution under uncertainty.
Li, Youping
2013-11-15
This paper extends the voluntary-threat approach of Segerson and Wu (2006) to the case where the ambient level of nonpoint source pollution is stochastic. It is shown that when the random component is bounded from above, fine-tuning the cutoff value of the tax payments avoids the actual imposition of the tax, while the threat of such payments retains the necessary incentive for the polluters to engage in abatement at the optimal level. If the random component is not bounded, the imposition of the tax cannot be completely avoided, but its probability can be reduced by setting a higher cutoff value. It is also noted that the regulator has additional flexibility in randomizing the tax imposition, but the randomization process has to be credible.
Using Maximum Entropy to Find Patterns in Genomes
NASA Astrophysics Data System (ADS)
Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints, such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs, which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
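The maximum-entropy construction can be sketched as Boltzmann-weighted synonymous codon sampling: the amino acid sequence is held fixed while a single multiplier λ on each codon's GC count tunes the expected GC content. The codon table below is a tiny illustrative subset and the whole sketch is an assumed reading of the method, not the authors' tool.

```python
import numpy as np

# synonymous codons for a few amino acids (hypothetical subset for illustration)
CODONS = {"K": ["AAA", "AAG"], "N": ["AAT", "AAC"], "E": ["GAA", "GAG"]}

def sample_sequence(protein, lam, rng):
    # weight each synonymous codon by exp(lam * GC(codon)); lam = 0 is uniform,
    # which is the maximum-entropy distribution under the amino acid constraint
    seq = []
    for aa in protein:
        opts = CODONS[aa]
        gc = np.array([c.count("G") + c.count("C") for c in opts])
        w = np.exp(lam * gc); w /= w.sum()
        seq.append(rng.choice(opts, p=w))
    return "".join(seq)

rng = np.random.default_rng(12)
for lam in (-2.0, 0.0, 2.0):
    s = sample_sequence("KNEKNE" * 50, lam, rng)
    print(lam, (s.count("G") + s.count("C")) / len(s))   # realized GC content
```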
Time distributions of solar energetic particle events: Are SEPEs really random?
NASA Astrophysics Data System (ADS)
Jiggens, P. T. A.; Gabriel, S. B.
2009-10-01
Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes and compared these with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. Inherent Poisson assumptions of stationarity and event independence are investigated, and it appears that they do not hold and can be dependent upon the event definition used. SEPEs appear to have some memory indicating that events are not completely random with activity levels varying even during solar active periods and are characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment should be modified to incorporate long-term event dependency and short-term system memory.
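The flavor of the waiting-time test is easy to reproduce: under a Poisson process, waiting times are exponential, so a heavy-tailed sample shows far more long gaps than the fitted exponential predicts. Synthetic Pareto waiting times stand in for the SEPE catalogue below.

```python
import numpy as np

rng = np.random.default_rng(13)
w = rng.pareto(1.5, 2000) + 1.0          # heavy-tailed waiting times (arbitrary units)

lam = 1.0 / w.mean()                     # MLE rate if the process were Poisson
q = np.quantile(w, 0.99)
emp_tail = np.mean(w > q)                # empirical tail probability, ~0.01
pois_tail = np.exp(-lam * q)             # what an exponential fit would predict
print("empirical P(W > q):", emp_tail, " Poisson prediction:", pois_tail)
```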
Multiplicative processes in visual cognition
NASA Astrophysics Data System (ADS)
Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.
2014-03-01
The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve, elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
Extended observability of linear time-invariant systems under recurrent loss of output data
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok; Halevi, Yoram
1989-01-01
Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.
Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it
2013-02-15
We consider the problem of maximization of expected terminal power utility (risk-sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite-state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process whose intensity is driven by the unobserved Markovian factor process as well. This leads to more realistic modeling for many practical situations, as in markets with liquidity restrictions; on the other hand, it considerably complicates the problem, to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).
Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.
2008-11-06
This paper presents an approximation to the nonlinear least-squares estimation problem of discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate whether the real observation arrives on time or is delayed, so that the available measurement to estimate the signal is not up-to-date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are available, a filtering algorithm based on linear approximations of the real observations is proposed.
Flaugnacco, Elena; Lopez, Luisa; Terribili, Chiara; Montico, Marcella; Zoia, Stefania; Schön, Daniele
2015-01-01
There is some evidence for a role of music training in boosting phonological awareness, word segmentation, working memory, as well as reading abilities in children with typical development. Poor performance in tasks requiring temporal processing, rhythm perception and sensorimotor synchronization seems to be a crucial factor underlying dyslexia in children. Interestingly, children with dyslexia show deficits in temporal processing, both in language and in music. Within this framework, we test the hypothesis that music training, by improving temporal processing and rhythm abilities, improves phonological awareness and reading skills in children with dyslexia. The study is a prospective, multicenter, open randomized controlled trial, consisting of test, rehabilitation and re-test (ID NCT02316873). After rehabilitation, the music group (N = 24) performed better than the control group (N = 22) in tasks assessing rhythmic abilities, phonological awareness and reading skills. This is the first randomized control trial testing the effect of music training in enhancing phonological and reading abilities in children with dyslexia. The findings show that music training can modify reading and phonological abilities even when these skills are severely impaired. Through the enhancement of temporal processing and rhythmic skills, music might become an important tool in both remediation and early intervention programs. Trial Registration ClinicalTrials.gov NCT02316873 PMID:26407242
78 FR 20320 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-04
...: select from for a random sample, get the survey to the appropriate respondent, and increase response rates. The survey will not be added to this package; instead, it will be processed under a different... Medicaid Services is requesting clearance for two surveys to aid in understanding levels of awareness and...
Adaptive economic and ecological forest management under risk
Joseph Buongiorno; Mo Zhou
2015-01-01
Background: Forest managers must deal with inherently stochastic ecological and economic processes. The future growth of trees is uncertain, and so is their value. The randomness of low-impact, high frequency or rare catastrophic shocks in forest growth has significant implications in shaping the mix of tree species and the forest landscape...
Nonparametric Bayesian predictive distributions for future order statistics
Richard A. Johnson; James W. Evans; David W. Green
1999-01-01
We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...
Rigorously testing multialternative decision field theory against random utility models.
Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg
2014-06-01
Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions.
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorems for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
NASA Astrophysics Data System (ADS)
Ji, Sungchul
A new mathematical formula referred to as the Planckian distribution equation (PDE) has been found to fit long-tailed histograms generated in various fields of studies, ranging from atomic physics to single-molecule enzymology, cell biology, brain neurobiology, glottometrics, econophysics, and to cosmology. PDE can be derived from a Gaussian-like equation (GLE) by non-linearly transforming its variable, x, while keeping the y coordinate constant. Assuming that GLE represents a random distribution (due to its symmetry), it is possible to define a binary logarithm of the ratio between the areas under the curves of PDE and GLE as a measure of the non-randomness (or order) underlying the biophysicochemical processes generating long-tailed histograms that fit PDE. This new function has been named the Planckian information, IP, which (i) may be a new measure of order that can be applied widely to both natural and human sciences and (ii) can serve as the opposite of the Boltzmann-Gibbs entropy, S, which is a measure of disorder. The possible rationales for the universality of PDE may include (i) the universality of the wave-particle duality embedded in PDE, (ii) the selection of subsets of random processes (thereby breaking the symmetry of GLE) as the basic mechanism of generating order, organization, and function, and (iii) the quantity-quality complementarity as the connection between PDE and Peircean semiotics.
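The abstract does not reproduce the functional forms of PDE and GLE, so the following sketch only illustrates how the Planckian information is defined as a binary-log area ratio; the Planck-like curve, the Gaussian stand-in, and all parameter values are placeholder assumptions:

```python
import numpy as np

def planck_like(x, a=1.0, b=2.0):
    # Placeholder long-tailed curve modeled on Planck's law (assumed form).
    return a / (x**5 * (np.exp(b / x) - 1.0))

def gaussian_like(x, mu=1.0, sigma=0.4):
    # Symmetric reference curve standing in for the GLE.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(0.05, 10.0, 20000)
area_pde = np.trapz(planck_like(x), x)
area_gle = np.trapz(gaussian_like(x), x)
I_P = np.log2(area_pde / area_gle)   # Planckian information as defined above
print(f"I_P = {I_P:.3f} bits")
```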
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, as well as information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design and at the event times. A method for fitting a mixed Poisson point process model is proposed for assessing the impact of partially observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
Das, Enny; Vonkeman, Charlotte; Hartmann, Tilo
2012-01-01
An experimental study tested the effects of positive and negative mood on the processing and acceptance of health recommendations about smoking in an online experiment. It was hypothesised that positive mood would provide smokers with the resources to systematically process self-relevant health recommendations. One hundred and twenty-seven participants (smokers and non-smokers) read a message in which a quit smoking programme was recommended. Participants were randomly assigned to one of four conditions: positive versus negative mood, and strong versus weak arguments for the recommended action. Systematic message processing was inferred when participants were able to distinguish between high- and low-quality arguments, and by congruence between attitudes and behavioural intentions. Persuasion was measured by participants' attitudes towards smoking and the recommended action, and by their intentions to follow the action recommendation. As predicted, smokers systematically processed the health message only under positive mood conditions; non-smokers systematically processed the health message only under negative mood conditions. Moreover, smokers' attitudes towards the health message predicted intentions to quit smoking only under positive mood conditions. Findings suggest that positive mood may decrease defensive processing of self-relevant health information.
Reliability of Space-Shuttle Pressure Vessels with Random Batch Effects
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Kulkarni, Pandurang M.
2000-01-01
In this article we revisit the problem of estimating the joint reliability against failure by stress rupture of a group of fiber-wrapped pressure vessels used on Space-Shuttle missions. The available test data were obtained from an experiment conducted at the U.S. Department of Energy Lawrence Livermore Laboratory (LLL) in which scaled-down vessels were subjected to life testing at four accelerated levels of pressure. We estimate the reliability assuming that both the Shuttle and LLL vessels were chosen at random in a two-stage process from an infinite population with spools of fiber as the primary sampling unit. Two main objectives of this work are: (1) to obtain practical estimates of reliability taking into account random spool effects and (2) to obtain a realistic assessment of estimation accuracy under the random model. Here, reliability is calculated in terms of a 'system' of 22 fiber-wrapped pressure vessels, taking into account typical pressures and exposure times experienced by Shuttle vessels. Comparisons are made with previous studies. The main conclusion of this study is that, although point estimates of reliability are still in the 'comfort zone,' it is advisable to plan for replacement of the pressure vessels well before the expected lifetime of 100 missions per Shuttle Orbiter. Under a random-spool model, there is simply not enough information in the LLL data to provide reasonable assurance that such replacement would not be necessary.
Perception of randomness: On the time of streaks.
Sun, Yanlong; Wang, Hongbin
2010-12-01
People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the most delayed pattern for its first occurrence. It is argued that when time is of the essence, how often a pattern is to occur (mean time, or frequency) and when a pattern is to first occur (waiting time) are different questions and bear different psychological relevance. The waiting time statistics may provide a quantitative measure of the psychological distance when people are expecting a probabilistic event, and such a measure is consistent with both the representativeness and availability heuristics in people's perception of randomness. We discuss some of the recent empirical findings and suggest that people's judgment and generation of random sequences may be guided by their actual experiences of the waiting time statistics. Published by Elsevier Inc.
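The waiting-time asymmetry is easy to verify by simulation: for a fair coin, the streak HHH takes about 14 tosses on average to first occur, versus 10 for HTH and 8 for HHT, even though all three patterns occur equally often in the long run. A quick sketch:

```python
import random

def waiting_time(pattern, rng):
    """Tosses of a fair coin until `pattern` first appears."""
    window, n = "", 0
    while not window.endswith(pattern):
        window += rng.choice("HT")
        n += 1
    return n

rng = random.Random(0)
for pattern in ["HHH", "HTH", "HHT"]:
    mean = sum(waiting_time(pattern, rng) for _ in range(100_000)) / 100_000
    print(pattern, round(mean, 2))   # ~14, ~10, ~8
```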
Dynamic speckle - Interferometry of micro-displacements
NASA Astrophysics Data System (ADS)
Vladimirov, A. P.
2012-06-01
The problem of the dynamics of speckles in the image plane of an object, caused by random movements of scattering centers, is solved. Three cases are considered: 1) during the observation the points move at random but constant velocities; 2) the relative displacement of any pair of points is a continuous random process; 3) the motion of the centers is the sum of a deterministic movement and a random displacement. For cases 1) and 2), the characteristics of the temporal and spectral autocorrelation functions of the radiation intensity can be used to determine the individual and average relative displacements of the centers, their dispersion, and the relaxation time. For case 3), it is shown that under certain conditions the optical signal contains a periodic component whose number of periods is proportional to the derivative of the deterministic displacement. The results of experiments conducted to test and apply the theory are given.
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2016-11-01
Inspired by the commonly observed real-world fact that people tend to behave in a somewhat random manner after facing interim equilibrium to break a stalemate situation whilst seeking a higher output, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with an added random noise instead of an original payoff matrix. A numerical simulation revealed that mechanisms based on the annealing of randomness due to either the action error or the payoff noise could significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to the concepts that we previously presented with respect to evolutionary dynamic processes under the names of enduring and expanding periods.
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
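A stripped-down version of the evaluation loop described above, assuming scikit-learn's KMeans for the clustering step; the class means, noise level, and sample sizes are arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [3.0, 3.0]])   # hypothetical class means
n_per_class, sigma = 50, 1.0

# Each random process: its mean plus independent Gaussian noise.
points = np.vstack([m + sigma * rng.standard_normal((n_per_class, 2)) for m in means])
labels = np.repeat([0, 1], n_per_class)

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)

# Clustering error: misassigned points under the best label permutation.
confusion = np.array([[np.sum((labels == i) & (pred == j)) for j in range(2)]
                      for i in range(2)])
row, col = linear_sum_assignment(-confusion)
error = 1.0 - confusion[row, col].sum() / labels.size
print(f"clustering error: {error:.3f}")
```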
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
NASA Astrophysics Data System (ADS)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.
Reduced Wiener Chaos representation of random fields via basis adaptation and projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu
2017-07-15
A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
Fermi problem in disordered systems
NASA Astrophysics Data System (ADS)
Menezes, G.; Svaiter, N. F.; de Mello, H. R.; Zarro, C. A. D.
2017-10-01
We revisit the Fermi two-atom problem in the framework of disordered systems. In our model, we consider a two-qubit system linearly coupled with a quantum massless scalar field. We analyze the energy transfer between the qubits under different experimental perspectives. In addition, we assume that the coefficients of the Klein-Gordon equation are random functions of the spatial coordinates. The disordered medium is modeled by a centered, stationary, and Gaussian process. We demonstrate that the classical notion of causality emerges only in the wave zone in the presence of random fluctuations of the light cone. Possible repercussions are discussed.
Reduced Wiener Chaos representation of random fields via basis adaptation and projection
NASA Astrophysics Data System (ADS)
Tsilifis, Panagiotis; Ghanem, Roger G.
2017-07-01
A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
Hanks, Ephraim M.; Schliep, Erin M.; Hooten, Mevin B.; Hoeting, Jennifer A.
2015-01-01
In spatial generalized linear mixed models (SGLMMs), covariates that are spatially smooth are often collinear with spatially smooth random effects. This phenomenon is known as spatial confounding and has been studied primarily in the case where the spatial support of the process being studied is discrete (e.g., areal spatial data). In this case, the most common approach suggested is restricted spatial regression (RSR) in which the spatial random effects are constrained to be orthogonal to the fixed effects. We consider spatial confounding and RSR in the geostatistical (continuous spatial support) setting. We show that RSR provides computational benefits relative to the confounded SGLMM, but that Bayesian credible intervals under RSR can be inappropriately narrow under model misspecification. We propose a posterior predictive approach to alleviating this potential problem and discuss the appropriateness of RSR in a variety of situations. We illustrate RSR and SGLMM approaches through simulation studies and an analysis of malaria frequencies in The Gambia, Africa.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
A geometric theory for Lévy distributions
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2014-08-01
Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.
A geometric theory for Lévy distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
2014-08-15
Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.
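The stable laws appearing as limits here can be sampled directly with the Chambers-Mallows-Stuck construction, which builds a symmetric α-stable variate from one uniform and one exponential draw. A sketch of the probabilistic (geometry-less) side only; it does not implement the paper's geometric model:

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(V)   # Cauchy case
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(1)
x = symmetric_stable(1.5, 100_000, rng)
# Heavy tail: P(|X| > t) ~ c * t**(-alpha), visible in the extreme quantiles.
print(np.quantile(np.abs(x), [0.5, 0.9, 0.99, 0.999]))
```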
NASA Astrophysics Data System (ADS)
Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui
2018-04-01
Considering that multi-source uncertainties, from the inherent nature of the system as well as from the external environment, are unavoidable and severely affect controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, uncertainty quantification analysis and time-variant reliability estimation for closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with specific implementation of the random items. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for vibration control systems, and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the developed methodology.
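The first-passage notion borrowed from random process theory is simple to state: the system fails when its response first leaves the safe region within the mission time. A Monte Carlo sketch on a toy AR(1) response, with the threshold, noise level, and horizon invented:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma, threshold, n_steps, n_mc = 0.95, 0.1, 1.0, 500, 20_000

# Toy controlled response: stable AR(1) dynamics driven by random disturbances.
x = np.zeros(n_mc)
failed = np.zeros(n_mc, dtype=bool)
for _ in range(n_steps):
    x = a * x + sigma * rng.standard_normal(n_mc)
    failed |= np.abs(x) > threshold   # first passage out of the safe region

print(f"time-variant failure probability ~ {failed.mean():.4f}")
# Reliability over the horizon is 1 minus this failure probability.
```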
Panmictic and Clonal Evolution on a Single Patchy Resource Produces Polymorphic Foraging Guilds
Getz, Wayne M.; Salter, Richard; Lyons, Andrew J.; Sippl-Swezey, Nicolas
2015-01-01
We develop a stochastic, agent-based model to study how genetic traits and experiential changes in the state of agents and available resources influence individuals' foraging and movement behaviors. These behaviors are manifest as decisions on when to stay and exploit a current resource patch or move to a particular neighboring patch, based on information about the resource qualities of the patches and the anticipated level of intraspecific competition within patches. We use a genetic algorithm approach and an individual's biomass as a fitness surrogate to explore the foraging strategy diversity of evolving guilds under clonal versus hermaphroditic sexual reproduction. We first present the resource exploitation processes, movement on cellular arrays, and genetic algorithm components of the model. We then discuss their implementation on the Nova software platform. This platform seamlessly combines the dynamical systems modeling of consumer-resource interactions with agent-based modeling of individuals moving over a landscape, using an architecture that makes transparent the following four hierarchical simulation levels: 1.) within-patch consumer-resource dynamics, 2.) within-generation movement and competition mitigation processes, 3.) across-generation evolutionary processes, and 4.) multiple runs to generate the statistics needed for comparative analyses. The focus of our analysis is on the question of how the biomass production efficiency and the diversity of guilds of foraging strategy types, exploiting resources over a patchy landscape, evolve under clonal versus random hermaphroditic sexual reproduction. Our results indicate greater biomass production efficiency under clonal reproduction only at higher population densities, and demonstrate that polymorphisms evolve and are maintained under random mating systems. The latter result questions the notion that some type of associative mating structure is needed to maintain genetic polymorphisms among individuals exploiting a common patchy resource on an otherwise spatially homogeneous landscape. PMID:26274613
Effect of Cooling Rate on SCC Susceptibility of β-Processed Ti-6Al-4V Alloy in 0.6M NaCl Solution
NASA Astrophysics Data System (ADS)
Ahn, Soojin; Park, Jiho; Jeong, Daeho; Sung, Hyokyung; Kwon, Yongnam; Kim, Sangshik
2018-03-01
The effects of cooling rate on the stress corrosion cracking (SCC) susceptibility of β-processed Ti-6Al-4V (Ti64) alloy, including a BA/S specimen with furnace cooling and a BQ/S specimen with water quenching, were investigated in 0.6M NaCl solution under various applied potentials using a slow strain rate test technique. It was found that the SCC susceptibility of β-processed Ti64 alloy in aqueous NaCl solution decreased with increasing cooling rate, an effect that was particularly substantial under an anodic applied potential. The micrographic and fractographic analyses suggested that the improvement at the faster cooling rate was related to the random orientation of acicular α platelets in the BQ/S specimen. Based on the experimental results, the effect of cooling rate on the SCC behavior of β-processed Ti64 alloy in aqueous NaCl solution was discussed.
Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions
Tsivgoulis, Georgios
2009-03-01
Engineer's thesis (March 2009). Subject terms: Wireless Sensor Network, Direction of Arrival, DOA, Random…
Decisionmaking under risk in invasive species management: risk management theory and applications
Shefali V. Mehta; Robert G. Haight; Frances R. Homans
2010-01-01
Invasive species management is closely entwined with the assessment and management of risk that arises from the inherently random nature of the invasion process. The theory and application of risk management for invasive species with an economic perspective is reviewed in this synthesis. Invasive species management can be delineated into three general categories:...
The Performance of Methods to Test Upper-Level Mediation in the Presence of Nonnormal Data
ERIC Educational Resources Information Center
Pituch, Keenan A.; Stapleton, Laura M.
2008-01-01
A Monte Carlo study compared the statistical performance of standard and robust multilevel mediation analysis methods to test indirect effects for a cluster randomized experimental design under various departures from normality. The performance of these methods was examined for an upper-level mediation process, where the indirect effect is a fixed…
Universal self-similarity of propagating populations
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2010-07-01
This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d -dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common—yet arbitrary—motion pattern; each particle has its own random propagation parameters—emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles’ displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles’ underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.
Universal self-similarity of propagating populations.
Eliazar, Iddo; Klafter, Joseph
2010-07-01
This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d-dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common--yet arbitrary--motion pattern; each particle has its own random propagation parameters--emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles' displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles' underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.
Spectral estimation of received phase in the presence of amplitude scintillation
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.
1988-01-01
A technique is demonstrated for obtaining the spectral parameters of the received carrier phase in the presence of carrier amplitude scintillation, by means of a digital phased locked loop. Since the random amplitude fluctuations generate time-varying loop characteristics, straightforward processing of the phase detector output does not provide accurate results. The method developed here performs a time-varying inverse filtering operation on the corrupted observables, thus recovering the original phase process and enabling accurate estimation of its underlying parameters.
Fractional Brownian motion and long term clinical trial recruitment
Zhang, Qiang; Lai, Dejian
2015-01-01
Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM); however, when the independent-increment structure of the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model with illustrative examples from different trials and simulations. PMID:26347306
Fractional Brownian motion and long term clinical trial recruitment.
Zhang, Qiang; Lai, Dejian
2011-05-01
Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM); however, when the independent-increment structure of the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model with illustrative examples from different trials and simulations.
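Exact fractional Gaussian noise can be drawn from the Cholesky factor of its covariance, and its cumulative sum gives an FBM path. A small sketch; the Hurst index and horizon are arbitrary, and this is generic FBM generation rather than the recruitment model itself:

```python
import numpy as np

def fbm_path(n, hurst, rng):
    """Exact FBM on n steps via Cholesky of the fractional Gaussian noise covariance."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]   # Toeplitz covariance of fGn
    L = np.linalg.cholesky(cov)
    fgn = L @ rng.standard_normal(n)
    return np.cumsum(fgn)

rng = np.random.default_rng(0)
path = fbm_path(250, hurst=0.7, rng=rng)   # H > 0.5: positively correlated increments
print(path[:5])
```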
NASA Astrophysics Data System (ADS)
Ismatkhodzhaev, S. K.; Kuzishchin, V. F.
2017-05-01
An automatic control system (ACS) for the thermal load of a drum-type boiler under random fluctuations in the blast-furnace and coke-oven gas consumption rates, with the control action applied to the natural gas consumption, is considered. The system provides for a compensator for the basic disturbance, the blast-furnace gas consumption rate. To enhance the performance of the system, it is proposed to use more accurate second-order-plus-delay mathematical models of the channels of the controlled object, in combination with frequency-domain calculation of the controller parameters and with determination of the structure and parameters of the compensator from the statistical characteristics of the disturbances and from simulation. The statistical characteristics of the random blast-furnace gas consumption signal, based on experimental data, are provided. The random signal is represented as the sum of low-frequency (LF) and high-frequency (HF) components, and models of the correlation functions and spectral densities are developed. The article presents the results of calculating the optimal settings of the control loop with the controlled variable in the form of the "heat" signal with a restricted frequency-variation index, using three variants of control performance criteria: the linear and quadratic integral indices under a step disturbance, and the control-error variance under a random disturbance in the blast-furnace gas consumption rate. It is recommended to design the compensator as a series connection of two parts, one of which corresponds to the operator inverse to the transfer function of the PI controller, i.e., a really differentiating element; this facilitates realization of the second part of the compensator, by the invariance condition, similarly to transmitting the compensating signal to the object input. The results of simulation under a random disturbance in the blast-furnace gas consumption are reported, and recommendations are made on the structure and parameters of the shaping filters for modeling the LF and HF components of the random signal. The results may find application in systems for controlling thermal processes with compensation of basic disturbances, in particular in boilers burning associated gases.
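A shaping filter of the recommended kind turns white noise into a signal with a prescribed correlation time. A discrete-time first-order sketch; the time constants and variances below are placeholders, not the article's identified values:

```python
import numpy as np

def shaping_filter(n, dt, tau, sigma, rng):
    """First-order shaping filter: white noise -> exponentially correlated signal."""
    a = np.exp(-dt / tau)                  # discrete-time pole
    q = sigma * np.sqrt(1.0 - a * a)       # keeps the stationary std equal to sigma
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = a * x[k - 1] + q * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
lf = shaping_filter(5000, dt=1.0, tau=120.0, sigma=1.0, rng=rng)   # slow LF component
hf = shaping_filter(5000, dt=1.0, tau=5.0, sigma=0.3, rng=rng)     # fast HF component
signal = lf + hf   # modeled consumption fluctuations as an LF + HF sum
print(signal.std())
```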
Transition in the decay rates of stationary distributions of Lévy motion in an energy landscape.
Kaleta, Kamil; Lőrinczi, József
2016-02-01
The time evolution of random variables with Lévy statistics has the ability to develop jumps, displaying very different behaviors from continuously fluctuating cases. Such patterns appear in an ever broadening range of examples including random lasers, non-Gaussian kinetics, or foraging strategies. The penalizing or reinforcing effect of the environment, however, has been little explored so far. We report a new phenomenon which manifests as a qualitative transition in the spatial decay behavior of the stationary measure of a jump process under an external potential, occurring under a combined change in the characteristics of the process and in the lowest eigenvalue resulting from the effect of the potential. This also provides insight into the fundamental question of what the mechanism of the spatial decay of a ground state is.
Dickinson, Christopher A.; Zelinsky, Gregory J.
2013-01-01
Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers’ oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d′ values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays. PMID:23372555
Nonequivalence of updating rules in evolutionary games under high mutation rates.
Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
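The two updating rules differ only in the order of the fitness-weighted and uniform draws. A bare-bones sketch for a well-mixed two-strategy population, with invented fitness values:

```python
import numpy as np

rng = np.random.default_rng(0)
fitness = {0: 1.0, 1: 1.2}            # hypothetical strategy fitnesses
pop = rng.integers(0, 2, size=100)    # well-mixed population of 100 agents

def birth_death(pop):
    w = np.array([fitness[s] for s in pop])
    parent = rng.choice(len(pop), p=w / w.sum())    # fitness-biased birth...
    dead = rng.integers(len(pop))                   # ...then uniform death
    pop[dead] = pop[parent]

def death_birth(pop):
    w = np.array([fitness[s] for s in pop])
    inv = 1.0 / w
    dead = rng.choice(len(pop), p=inv / inv.sum())  # unfit-biased death...
    parent = rng.integers(len(pop))                 # ...then uniform birth
    pop[dead] = pop[parent]

for _ in range(5000):
    birth_death(pop)
print("fraction of strategy 1 after birth-death updating:", pop.mean())
```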
Motes, Michael A; Yezhuvath, Uma S; Aslan, Sina; Spence, Jeffrey S; Rypma, Bart; Chapman, Sandra B
2018-02-01
Higher-order cognitive training has been shown to enhance performance in older adults, but the neural mechanisms underlying performance enhancement have yet to be fully disambiguated. This randomized trial examined changes in processing speed and processing speed-related neural activity in older participants (57-71 years of age) who underwent cognitive training (CT, N = 12) compared with wait-listed (WLC, N = 15) or exercise-training active (AC, N = 14) controls. The cognitive training taught cognitive control functions of strategic attention, integrative reasoning, and innovation over 12 weeks. All 3 groups worked through a functional magnetic resonance imaging processing speed task during 3 sessions (baseline, mid-training, and post-training). Although all groups showed faster reaction times (RTs) across sessions, the CT group showed a significant increase, and the WLC and AC groups showed significant decreases across sessions in the association between RT and BOLD signal change within the left prefrontal cortex (PFC). Thus, cognitive training led to a change in processing speed-related neural activity where faster processing speed was associated with reduced PFC activation, fitting previously identified neural efficiency profiles. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Nonequivalence of updating rules in evolutionary games under high mutation rates
NASA Astrophysics Data System (ADS)
Kaiping, G. A.; Jacobs, G. S.; Cox, S. J.; Sluckin, T. J.
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
NASA Astrophysics Data System (ADS)
Pecháček, T.; Goosmann, R. W.; Karas, V.; Czerny, B.; Dovčiak, M.
2013-08-01
Context. We study some general properties of accretion disc variability in the context of stationary random processes. In particular, we are interested in mathematical constraints that can be imposed on the functional form of the Fourier power-spectrum density (PSD) that exhibits a multiply broken shape and several local maxima. Aims: We develop a methodology for determining the regions of the model parameter space that can in principle reproduce a PSD shape with a given number and position of local peaks and breaks of the PSD slope. Given the vast space of possible parameters, it is an important requirement that the method is fast in estimating the PSD shape for a given parameter set of the model. Methods: We generated and discuss the theoretical PSD profiles of a shot-noise-type random process with exponentially decaying flares. Then we determined conditions under which one, two, or more breaks or local maxima occur in the PSD. We calculated positions of these features and determined the changing slope of the model PSD. Furthermore, we considered the influence of the modulation by the orbital motion for a variability pattern assumed to result from an orbiting-spot model. Results: We suggest that our general methodology can be useful for describing non-monotonic PSD profiles (such as the trend seen, on different scales, in exemplary cases of the high-mass X-ray binary Cygnus X-1 and the narrow-line Seyfert galaxy Ark 564). We adopt a model where these power spectra are reproduced as a superposition of several Lorentzians with varying amplitudes in the X-ray-band light curve. Our general approach can help in constraining the model parameters and in determining which parts of the parameter space are accessible under various circumstances.
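For shot noise built from exponentially decaying flares, each decay time contributes a zero-centered Lorentzian to the PSD, so the model spectrum is a weighted sum of Lorentzians. A sketch of that superposition with arbitrary amplitudes and timescales:

```python
import numpy as np

def psd_lorentzians(freq, amplitudes, taus):
    """Shot-noise PSD: sum of zero-centered Lorentzians, one per decay time."""
    psd = np.zeros_like(freq)
    for a, tau in zip(amplitudes, taus):
        psd += a * tau / (1.0 + (2.0 * np.pi * freq * tau) ** 2)
    return psd

freq = np.logspace(-4, 1, 500)
psd = psd_lorentzians(freq, amplitudes=[1.0, 0.3], taus=[100.0, 1.0])
# The local slope of log PSD vs log f reveals the breaks between components.
slope = np.gradient(np.log(psd), np.log(freq))
print(slope[::100])
```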
Williams, Isobel Anne; Wilkinson, Leonora; Limousin, Patricia; Jahanshahi, Marjan
2015-01-01
Deep brain stimulation of the subthalamic nucleus (STN DBS) ameliorates the motor symptoms of Parkinson's disease (PD). However, some aspects of executive control are impaired with STN DBS. We tested the prediction that (i) STN DBS interferes with switching from automatic to controlled processing during fast-paced random number generation (RNG) (ii) STN DBS-induced cognitive control changes are load-dependent. Fifteen PD patients with bilateral STN DBS performed paced-RNG, under three levels of cognitive load synchronised with a pacing stimulus presented at 1, 0.5 and 0.33 Hz (faster rates require greater cognitive control), with DBS on or off. Measures of output randomness were calculated. Countscore 1 (CS1) indicates habitual counting in steps of one (CS1). Countscore 2 (CS2) indicates a more controlled strategy of counting in twos. The fastest rate was associated with an increased CS1 score with STN DBS on compared to off. At the slowest rate, patients had higher CS2 scores with DBS off than on, such that the differences between CS1 and CS2 scores disappeared. We provide evidence for a load-dependent effect of STN DBS on paced RNG in PD. Patients could switch to more controlled RNG strategies during conditions of low cognitive load at slower rates only when the STN stimulators were off, but when STN stimulation was on, they engaged in more automatic habitual counting under increased cognitive load. These findings are consistent with the proposal that the STN implements a switch signal from the medial frontal cortex which enables a shift from automatic to controlled processing.
Williams, Isobel Anne; Wilkinson, Leonora; Limousin, Patricia; Jahanshahi, Marjan
2015-01-01
Background: Deep brain stimulation of the subthalamic nucleus (STN DBS) ameliorates the motor symptoms of Parkinson’s disease (PD). However, some aspects of executive control are impaired with STN DBS. Objective: We tested the prediction that (i) STN DBS interferes with switching from automatic to controlled processing during fast-paced random number generation (RNG) (ii) STN DBS-induced cognitive control changes are load-dependent. Methods: Fifteen PD patients with bilateral STN DBS performed paced-RNG, under three levels of cognitive load synchronised with a pacing stimulus presented at 1, 0.5 and 0.33 Hz (faster rates require greater cognitive control), with DBS on or off. Measures of output randomness were calculated. Countscore 1 (CS1) indicates habitual counting in steps of one (CS1). Countscore 2 (CS2) indicates a more controlled strategy of counting in twos. Results: The fastest rate was associated with an increased CS1 score with STN DBS on compared to off. At the slowest rate, patients had higher CS2 scores with DBS off than on, such that the differences between CS1 and CS2 scores disappeared. Conclusions: We provide evidence for a load-dependent effect of STN DBS on paced RNG in PD. Patients could switch to more controlled RNG strategies during conditions of low cognitive load at slower rates only when the STN stimulators were off, but when STN stimulation was on, they engaged in more automatic habitual counting under increased cognitive load. These findings are consistent with the proposal that the STN implements a switch signal from the medial frontal cortex which enables a shift from automatic to controlled processing. PMID:25720447
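The exact scoring rules are not given in the abstracts, so the following reading of the count scores is an assumption: count how often successive responses step by one (CS1) or by two (CS2), modulo 10 for digit sequences.

```python
def count_score(seq, step):
    """Fraction of adjacent pairs that ascend or descend by `step` (mod 10) -- assumed scoring rule."""
    hits = sum(1 for a, b in zip(seq, seq[1:])
               if (b - a) % 10 == step or (a - b) % 10 == step)
    return hits / (len(seq) - 1)

responses = [3, 4, 5, 7, 9, 0, 1, 2, 4, 6]   # hypothetical paced-RNG output
print("CS1:", count_score(responses, 1))      # habitual counting in ones
print("CS2:", count_score(responses, 2))      # controlled counting in twos
```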
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
Wiegers, Maike; Metzger, Coraline D.; Walter, Martin; Grön, Georg; Abler, Birgit
2015-01-01
Background: Impaired sexual function is increasingly recognized as a side effect of psychopharmacological treatment. However, underlying mechanisms of action of the different drugs on sexual processing are still to be explored. Using functional magnetic resonance imaging, we previously investigated effects of serotonergic (paroxetine) and dopaminergic (bupropion) antidepressants on sexual functioning (Abler et al., 2011). Here, we studied the impact of noradrenergic and antidopaminergic medication on neural correlates of visual sexual stimulation in a new sample of subjects. Methods: Nineteen healthy heterosexual males (mean age 24 years, SD 3.1) under subchronic intake (7 days) of the noradrenergic agent reboxetine (4mg/d), the antidopaminergic agent amisulpride (200mg/d), and placebo were included and studied with functional magnetic resonance imaging within a randomized, double-blind, placebo-controlled, within-subjects design during an established erotic video-clip task. Subjective sexual functioning was assessed using the Massachusetts General Hospital-Sexual Functioning Questionnaire. Results: Relative to placebo, subjective sexual functioning was attenuated under reboxetine along with diminished neural activations within the caudate nucleus. Altered neural activations correlated with decreased sexual interest. Under amisulpride, neural activations and subjective sexual functioning remained unchanged. Conclusions: In line with previous interpretations of the role of the caudate nucleus in the context of primary reward processing, attenuated caudate activation may reflect detrimental effects on motivational aspects of erotic stimulus processing under noradrenergic agents. PMID:25612894
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
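In the ergodic baseline the paper starts from, maximizing Boltzmann-Gibbs-Shannon entropy under normalization and a mean-energy constraint yields the exponential (Boltzmann) distribution; a compact restatement:

```latex
\[
  \max_{p}\; S[p] = -\sum_i p_i \ln p_i
  \qquad \text{subject to} \qquad
  \sum_i p_i = 1, \quad \sum_i p_i E_i = \langle E \rangle,
\]
\[
  \frac{\partial}{\partial p_i}\Big(S - \alpha \sum_j p_j - \beta \sum_j p_j E_j\Big) = 0
  \quad\Longrightarrow\quad
  p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i}.
\]
```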
Quantifying Adventitious Error in a Covariance Structure as a Random Effect
Wu, Hao; Browne, Michael W.
2017-01-01
We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the dispersion parameter of this distribution to be estimated gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463
Role of protein fluctuation correlations in electron transfer in photosynthetic complexes.
Nesterov, Alexander I; Berman, Gennady P
2015-04-01
We consider the dependence of the electron transfer in photosynthetic complexes on correlation properties of random fluctuations of the protein environment. The electron subsystem is modeled by a finite network of connected electron (exciton) sites. The fluctuations of the protein environment are modeled by random telegraph processes, which act either collectively (correlated) or independently (uncorrelated) on the electron sites. We derived an exact closed system of first-order linear differential equations with constant coefficients, for the average density matrix elements and for their first moments. Under some conditions, we obtained analytic expressions for the electron transfer rates and found the range of parameters for their applicability by comparing with the exact numerical simulations. We also compared the correlated and uncorrelated regimes and demonstrated numerically that the uncorrelated fluctuations of the protein environment can, under some conditions, either increase or decrease the electron transfer rates.
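A random telegraph process is a two-state Markov jump signal with exponentially distributed dwell times; sampling one trajectory takes a few lines (the switching rate and amplitude are placeholders):

```python
import numpy as np

def telegraph(t_max, rate, amp, rng):
    """Two-state random telegraph signal +/-amp with exponential dwell times."""
    times, states = [0.0], [amp if rng.random() < 0.5 else -amp]
    t = rng.exponential(1.0 / rate)
    while t < t_max:
        times.append(t)
        states.append(-states[-1])          # flip at each switching event
        t += rng.exponential(1.0 / rate)
    return np.array(times), np.array(states)

rng = np.random.default_rng(0)
times, xi = telegraph(t_max=100.0, rate=0.5, amp=1.0, rng=rng)
# Correlated regime: one telegraph drives all sites; uncorrelated: one per site.
print(len(times), xi[:6])
```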
NASA Astrophysics Data System (ADS)
Ma, Jin-fang; Wang, Guang-wei; Zhang, Jian-liang; Li, Xin-yu; Liu, Zheng-jian; Jiao, Ke-xin; Guo, Jian
2017-05-01
In this work, the reduction behavior of vanadium-titanium sinters was studied under five different sets of pulverized coal injection conditions with oxygen enrichment. A modified random pore model was established to analyze the reduction kinetics. The results show that the reduction rate of the sinters was accelerated by an increase in CO and H2 contents. Meanwhile, with the increase in CO and H2 contents, the increment in the medium reduction index (MRE) of the sinters diminished. The increasing oxygen enrichment ratio played a diminishing role in improving the reduction behavior of the sinters. The kinetic parameters of the reduction process were determined using the modified random pore model. The results indicated that, with increasing oxygen enrichment, the contents of CO and H2 in the reducing gas increased. The reduction activation energy of the sinters decreased to between 20.4 and 23.2 kJ/mol.
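The classical random pore model of Bhatia and Perlmutter writes the conversion rate as dX/dt = k(1-X)√(1-ψ ln(1-X)); the authors' modification is not spelled out in the abstract, so this sketch integrates only the classical form with invented parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

k, psi = 0.02, 3.0   # hypothetical rate constant and pore-structure parameter

def random_pore_rhs(t, y):
    # Classical random pore model: dX/dt = k (1 - X) sqrt(1 - psi ln(1 - X)).
    X = min(y[0], 1.0 - 1e-12)   # guard against numerical overshoot past X = 1
    return [k * (1.0 - X) * np.sqrt(1.0 - psi * np.log(1.0 - X))]

sol = solve_ivp(random_pore_rhs, (0.0, 200.0), [0.0], dense_output=True)
for t in (25, 50, 100, 200):
    print(f"t = {t:3d}  conversion X = {sol.sol(t)[0]:.3f}")
```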
Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bo Lijun; Wang Yongjin; Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn
2013-08-01
We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.
Shifting the focus to practice quality improvement in radiation oncology.
Crozier, Cheryl; Erickson-Wittmann, Beth; Movsas, Benjamin; Owen, Jean; Khalid, Najma; Wilson, J Frank
2011-09-01
To demonstrate how the American College of Radiology, Quality Research in Radiation Oncology (QRRO) process survey database can serve as an evidence base for assessing quality of care in radiation oncology. QRRO has drawn a stratified random sample of radiation oncology facilities in the USA and invited those facilities to participate in a Process Survey. Information from a prior QRRO Facilities Survey has been used along with data collected under the current National Process Survey to calculate national averages and make statistically valid inferences for national process measures for selected cancers in which radiation therapy plays a major role. These measures affect outcomes important to patients and providers and measure quality of care. QRRO's survey data provides national benchmark data for numerous quality indicators. The Process Survey is "fully qualified" as a Practice Quality Improvement project by the American Board of Radiology under its Maintenance of Certification requirements for radiation oncology and radiation physics. © 2011 National Association for Healthcare Quality.
Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E
2001-01-01
Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
ERIC Educational Resources Information Center
Nutting, Paul A.; And Others
Six Indian Health Service (IHS) units, chosen in a non-random manner, were evaluated via a quality assessment methodology currently under development by the IHS Office of Research and Development. A set of seven health problems (tracers) was selected to represent major health problems, and clinical algorithms (process maps) were constructed for…
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
Signaling in large-scale neural networks.
Berg, Rune W; Hounsgaard, Jørn
2009-02-01
We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.
ERIC Educational Resources Information Center
Bolkan, San
2017-01-01
This study examined how, and under what conditions, teacher clarity (i.e., structure/signaling) impacts student learning. One hundred and forty eight students reported their propensity to approach their studies with a mastery orientation and were randomly exposed to a lesson on persuasion that was either signaled or not. After the lesson, students…
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann's mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
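The von Neumann statistic scores a trial lag by the mean-square successive difference of the combined, time-shifted pair of light curves; the lag minimizing it is the delay estimate. A toy sketch on uniformly sampled data (real reverberation data are irregular, which is where the method is most useful):

```python
import numpy as np

def von_neumann(t1, f1, t2, f2, lag):
    """Mean-square successive difference of the combined series at a trial lag."""
    t = np.concatenate([t1, t2 - lag])   # shift the echo back by the trial lag
    f = np.concatenate([f1, f2])
    order = np.argsort(t)                # merge into one time-ordered series
    fs = f[order]
    return np.mean(np.diff(fs) ** 2)

rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0)
signal = np.cumsum(rng.standard_normal(t.size))   # random-walk "continuum"
true_lag = 15
echo = np.roll(signal, true_lag)                  # delayed copy as the "line"
lags = np.arange(0, 40)
scores = [von_neumann(t, signal, t, echo, lag) for lag in lags]
print("estimated lag:", lags[int(np.argmin(scores))])   # should be near 15
```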
Development of an ideal observer that incorporates nuisance parameters and processes list-mode data
MacGahan, Christopher Jonathan; Kupinski, Matthew Alan; Hilton, Nathan R.; ...
2016-02-01
Observer models were developed to process data in list-mode format in order to perform binary discrimination tasks for use in an arms-control-treaty context. Data used in this study were generated using GEANT4 Monte Carlo simulations for photons, using custom models of plutonium inspection objects and a radiation imaging system. Observer-model performance was evaluated and presented using the area under the receiver operating characteristic curve. Lastly, we studied the ideal observer under both signal-known-exactly conditions and in the presence of unknowns such as object orientation and absolute count-rate variability; when these additional sources of randomness were present, their incorporation into the observer yielded superior performance.
Computing diffusivities from particle models out of equilibrium
NASA Astrophysics Data System (ADS)
Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia
2018-04-01
A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.
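For the simplest of the three models, independent random walkers, the diffusivity is known exactly, which makes it a convenient sanity check. The sketch below is not the fluctuation-based estimator of the paper; it is the standard mean-square-displacement baseline (MSD(t) ≈ 2Dt in one dimension) that such an estimator can be validated against, with all parameter values chosen purely for illustration.

```python
import numpy as np

# Baseline check: estimate the diffusivity of independent lattice random
# walkers from the growth of the mean-square displacement, MSD(t) ~ 2*D*t.
rng = np.random.default_rng(1)
n_walkers, n_steps, dt, a = 5000, 1000, 1.0, 1.0   # a = lattice spacing
steps = rng.choice((-a, a), size=(n_walkers, n_steps))
paths = np.cumsum(steps, axis=1)
msd = np.mean(paths ** 2, axis=0)                  # average over walkers
t = dt * np.arange(1, n_steps + 1)
D_est = np.polyfit(t, msd, 1)[0] / 2.0             # slope / 2 in one dimension
print(f"estimated D = {D_est:.4f}, exact D = {a**2 / (2*dt):.4f}")
```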
Resolvent-Techniques for Multiple Exercise Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no
2015-02-15
We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce stopping problems with infinitely many exercise rights to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.
Robust non-parametric one-sample tests for the analysis of recurrent events.
Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia
2010-12-30
One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. Copyright © 2010 John Wiley & Sons, Ltd.
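A minimal sketch of the unweighted version of such a test, assuming a homogeneous Poisson reference rate and a Poisson variance for the standardization; the function name, the toy data, and the reference rate are illustrative, and the robust and stratified variants described above are not implemented.

```python
import numpy as np
from scipy.stats import norm

def one_sample_recurrent_test(n_events, followup, rate0):
    """Compare observed recurrent-event counts with those expected under a
    homogeneous Poisson reference rate `rate0` (events per unit time);
    the variance of the total count is taken as Poisson."""
    expected = rate0 * np.asarray(followup)
    z = (np.sum(n_events) - expected.sum()) / np.sqrt(expected.sum())
    return z, 2 * norm.sf(abs(z))        # standardized distance, two-sided p

# toy usage: 8 children, observed severe infections vs. a reference rate
z, p = one_sample_recurrent_test(
    n_events=[3, 1, 0, 2, 4, 1, 0, 2],
    followup=[2.0, 1.5, 1.0, 2.5, 3.0, 1.0, 0.5, 2.0],   # years at risk
    rate0=0.6)                                           # infections per year
print(f"z = {z:.2f}, p = {p:.3f}")
```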
DNA asymmetry in stem cells - immortal or mortal?
Yadlapalli, Swathi; Yamashita, Yukiko M
2013-09-15
The immortal strand hypothesis proposes that stem cells retain a template copy of genomic DNA (i.e. an 'immortal strand') to avoid replication-induced mutations. An alternative hypothesis suggests that certain cells segregate sister chromatids non-randomly to transmit distinct epigenetic information. However, this area of research has been highly controversial, with conflicting data even from the same cell types. Moreover, historically, the same terms 'non-random sister chromatid segregation' and 'biased sister chromatid segregation' have been used to indicate distinct biological processes, generating confusion about the biological significance and potential mechanism of each phenomenon. Here, we discuss the models of non-random sister chromatid segregation, and we explore the strengths and limitations of the various techniques and experimental model systems used to study this question. We also describe our recent study on Drosophila male germline stem cells, where sister chromatids of X and Y chromosomes are segregated non-randomly during cell division. We aim to integrate the existing evidence to speculate on the underlying mechanisms and biological relevance of this long-standing observation on non-random sister chromatid segregation.
Individualizing drug dosage with longitudinal data.
Zhu, Xiaolu; Qu, Annie
2016-10-30
We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.
Surgery for post-vitrectomy cataract
Do, Diana V; Gichuhi, Stephen; Vedula, Satyanarayana S; Hawkins, Barbara S
2014-01-01
Background Cataract formation or acceleration can occur after intraocular surgery, especially following vitrectomy, a surgical technique for removing the vitreous which is used in the treatment of disorders that affect the posterior segment of the eye. The underlying problem that led to vitrectomy may limit the benefit from cataract surgery. Objectives The objective of this review was to evaluate the effectiveness and safety of surgery for post-vitrectomy cataract with respect to visual acuity, quality of life, and other outcomes. Search methods We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2013, Issue 4), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily Update, Ovid OLDMEDLINE (January 1946 to May 2013), EMBASE (January 1980 to May 2013), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to May 2013), PubMed (January 1946 to May 2013), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrial.gov) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 22 May 2013. Selection criteria We planned to include randomized and quasi-randomized controlled trials comparing cataract surgery with no surgery in adult patients who developed cataract following vitrectomy. Data collection and analysis Two authors screened the search results independently according to the standard methodological procedures expected by The Cochrane Collaboration. Main results We found no randomized or quasi-randomized controlled trials comparing cataract surgery with no cataract surgery for patients who developed cataracts following vitrectomy surgery. Authors' conclusions There is no evidence from randomized or quasi-randomized controlled trials on which to base clinical recommendations for surgery for post-vitrectomy cataract. There is a clear need for randomized controlled trials to address this evidence gap. Such trials should stratify participants by their age, the retinal disorder leading to vitrectomy, and the status of the underlying disease process in the contralateral eye. Outcomes assessed in such trials may include gain of vision on the Early Treatment Diabetic Retinopathy Study (ETDRS) scale, quality of life, and adverse events such as posterior capsular rupture. Both short-term (six-month) and long-term (one-year or two-year) outcomes should be examined. PMID:24357418
Phylogenetic mixtures and linear invariants for equal input models.
Casanellas, Marta; Steel, Mike
2017-04-01
The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
The Effect of the Underlying Distribution in Hurst Exponent Estimation
Sánchez, Miguel Ángel; Trinidad, Juan E.; García, José; Fernández, Manuel
2015-01-01
In this paper, a heavy-tailed distribution approach is considered in order to explore the behavior of actual financial time series. We show that this kind of distribution allows us to properly fit the empirical distribution of the stocks from the S&P500 index. In addition, we explain in detail why the underlying distribution of the random process under study should be taken into account before using its self-similarity exponent as a reliable tool to state whether that financial series displays long-range dependence or not. Finally, we show that, under this model, no stocks from the S&P500 index show persistent memory, whereas some of them do present anti-persistent memory and most of them present no memory at all. PMID:26020942
Analysis of backward error recovery for concurrent processes with recovery blocks
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1982-01-01
Three different methods of implementing recovery blocks (RBs) are considered: asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points (PRPs) are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models were developed for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when PRPs are used were estimated.
Inferring Group Processes from Computer-Mediated Affective Text Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, Jack C; Begoli, Edmon; Jose, Ajith
2011-02-01
Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.
Enhancements and Algorithms for Avionic Information Processing System Design Methodology.
1982-06-16
programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze... allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with... There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP
Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions
NASA Astrophysics Data System (ADS)
Nabi, Jameel-Un; Böyükata, Mahmut
2017-01-01
The structure and the weak interaction mediated rates of the heavy waiting point (WP) nuclei 80Zr, 84Mo, 88Ru, 92Pd and 96Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of WP nuclei, energy rates of β-delayed protons and their emission probabilities were later calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with previous calculations. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. For rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for 80Zr. For the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the view that electron capture forms an integral part of the weak rates under rp-process conditions and plays an important role in nuclear model calculations.
Graf, Heiko; Wiegers, Maike; Metzger, Coraline D; Walter, Martin; Grön, Georg; Abler, Birgit
2014-10-31
Impaired sexual function is increasingly recognized as a side effect of psychopharmacological treatment. However, the underlying mechanisms of action of the different drugs on sexual processing are still to be explored. Using functional magnetic resonance imaging, we previously investigated effects of serotonergic (paroxetine) and dopaminergic (bupropion) antidepressants on sexual functioning (Abler et al., 2011). Here, we studied the impact of noradrenergic and antidopaminergic medication on neural correlates of visual sexual stimulation in a new sample of subjects. Nineteen healthy heterosexual males (mean age 24 years, SD 3.1) under subchronic intake (7 days) of the noradrenergic agent reboxetine (4 mg/d), the antidopaminergic agent amisulpride (200 mg/d), and placebo were included and studied with functional magnetic resonance imaging within a randomized, double-blind, placebo-controlled, within-subjects design during an established erotic video-clip task. Subjective sexual functioning was assessed using the Massachusetts General Hospital-Sexual Functioning Questionnaire. Relative to placebo, subjective sexual functioning was attenuated under reboxetine, along with diminished neural activations within the caudate nucleus. Altered neural activations correlated with decreased sexual interest. Under amisulpride, neural activations and subjective sexual functioning remained unchanged. In line with previous interpretations of the role of the caudate nucleus in the context of primary reward processing, attenuated caudate activation may reflect detrimental effects on motivational aspects of erotic stimulus processing under noradrenergic agents. © The Author 2015. Published by Oxford University Press on behalf of CINP.
Effect of SiC particle impact nano-texturing on tribological performance of 304L stainless steel
NASA Astrophysics Data System (ADS)
Lorenzo-Martin, C.; Ajayi, O. O.
2014-10-01
Topographical features on sliding contact surfaces are known to have a significant impact on friction and wear. Indeed, various forms of surface texturing are being used to improve and/or control the tribological performance of sliding surfaces. In this paper, the effect of random surface texturing produced by a mechanical impact process on the friction and wear behavior of 304L stainless steel (SS) is studied under dry and marginal oil lubrication. The surface processing was applied to 304L SS flat specimens, which were tested under reciprocating ball-on-flat sliding contact with a 440C stainless steel ball. Under dry contact, the impact-textured surface exhibited two orders of magnitude lower wear than the isotropically ground surface of the same material. After 1500 s of sliding, wear-through of the processed surface layer and the onset of scuffing caused the impact-textured surface to undergo a transition in wear and friction behavior. Under marginal oil lubrication, however, no such transition occurred, and the wear for the impact-textured surface was consistently two orders of magnitude lower than that for the ground material. Mechanisms for the tribological performance enhancement are proposed.
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} - δ_{k-1} with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
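A minimal simulation sketch of the constant-handicap case for Exp(1) entries, useful for observing the stationary phase in which a finite fraction of entries are records; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def record_fraction(delta, n=200000, seed=2):
    """Simulate the delta-exceedance record process for i.i.d. Exp(1)
    entries: an entry X becomes a record if X > (current record) - delta.
    Returns the fraction of entries that are records."""
    rng = np.random.default_rng(seed)
    record, count = -np.inf, 0
    for xi in rng.exponential(1.0, size=n):
        if xi > record - delta:      # handicap lowers the bar by delta
            record, count = xi, count + 1
    return count / n

# delta = 0 recovers the classical record process (vanishing fraction);
# larger delta yields a finite stationary record fraction
for d in (0.0, 0.5, 1.0, 2.0):
    print(f"delta = {d}: record fraction ~ {record_fraction(d):.4f}")
```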
Gaussian random bridges and a geometric model for information equilibrium
NASA Astrophysics Data System (ADS)
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
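The gamma-Poisson construction at the heart of the NB process can be checked numerically in a few lines. The sketch below draws gamma-distributed rates, then Poisson counts, and compares the sample moments with the NB(r, p) moments; the parameterization (gamma scale = p/(1-p)) is one common convention, chosen here for illustration.

```python
import numpy as np

# Gamma-Poisson mixture: a gamma rate measure marginalized against a
# Poisson count yields a negative binomial NB(r, p) distribution.
rng = np.random.default_rng(3)
r, p, n = 2.0, 0.3, 200000
rates = rng.gamma(shape=r, scale=p / (1 - p), size=n)    # gamma-distributed rates
counts = rng.poisson(rates)                              # Poisson counts given rates
mean_nb, var_nb = r * p / (1 - p), r * p / (1 - p) ** 2  # NB(r, p) moments
print(f"sample mean {counts.mean():.3f} vs NB mean {mean_nb:.3f}")
print(f"sample var  {counts.var():.3f} vs NB var  {var_nb:.3f}")
```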
Cancelable biometrics realization with multispace random projections.
Teoh, Andrew Beng Jin; Yuang, Chong Tze
2007-10-01
Biometric characteristics cannot be changed; therefore, the loss of privacy is permanent if they are ever compromised. This paper presents a two-factor cancelable formulation, where the biometric data are distorted in a revocable but non-reversible manner by first transforming the raw biometric data into a fixed-length feature vector and then projecting the feature vector onto a sequence of random subspaces that were derived from a user-specific pseudorandom number (PRN). This process is revocable and makes replacing biometrics as easy as replacing PRNs. The formulation has been verified under a number of scenarios (normal, stolen PRN, and compromised biometrics scenarios) using 2400 Facial Recognition Technology face images. The diversity property is also examined.
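A minimal sketch of the two-factor idea, assuming a Gaussian random projection seeded by the user-specific PRN and a sign binarization; the function name, the dimensions, and the single-projection simplification (the paper uses a sequence of subspaces) are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def cancelable_template(features, user_prn, out_dim=40):
    """Two-factor cancelable transform: project a fixed-length biometric
    feature vector onto a random subspace derived from a user-specific
    pseudorandom number (PRN). Revocation = issuing a new PRN."""
    rng = np.random.default_rng(user_prn)             # PRN seeds the projection
    proj = rng.normal(size=(out_dim, features.size)) / np.sqrt(out_dim)
    return np.sign(proj @ features)                   # binarized, non-reversible

rng = np.random.default_rng(4)
face_vec = rng.normal(size=256)                       # stand-in feature vector
t1 = cancelable_template(face_vec, user_prn=12345)
t2 = cancelable_template(face_vec, user_prn=99999)    # reissued after compromise
print("agreement between old/new templates:", np.mean(t1 == t2))  # ~0.5
```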
Planetesimal and Protoplanet Dynamics in a Turbulent Protoplanetary Disk
NASA Astrophysics Data System (ADS)
Yang, Chao-Chin; Mac Low, M.; Menou, K.
2010-01-01
In the core accretion scenario of planet formation, kilometer-sized planetesimals are the building blocks of planetary cores. Their dynamics, however, are strongly influenced by their natal protoplanetary gas disks. It is generally believed that these disks are turbulent, most likely due to the magnetorotational instability. The resulting density perturbations in the gas render the movement of the particles a random process. Depending on its strength, this process might cause several interesting consequences in the course of planet formation, specifically for the survivability of objects under rapid inward type-I migration and/or collisional destruction. Using the local-shearing-box approximation, we conduct numerical simulations of planetesimals moving in a turbulent, magnetized gas disk, either unstratified or vertically stratified. We produce a fiducial disk model with turbulent accretion of Shakura-Sunyaev alpha about 10^-2 and root-mean-square density perturbation of about 10%, and statistically characterize the evolution of the orbital properties of the particles moving in the disk. These measurements result in an accurate calibration of the random process of particle orbital change, indicating noticeably smaller magnitudes than predicted by global simulations, although the results may depend on the size of the shearing box. We apply these results to revisit the survivability of planetesimals under collisional destruction and of protoplanets under type-I migration. Planetesimals are probably secure from collisional destruction, except for kilometer-sized objects situated in the outer regions of a young protoplanetary disk. On the other hand, we confirm earlier studies of local models in that type-I migration probably dominates diffusive migration due to stochastic torques for most planetary cores and terrestrial planets. Discrepancies in the derived magnitude of turbulence between local and global simulations of magnetorotationally unstable disks remain an open issue, with important consequences for planet formation scenarios.
A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes
2015-01-01
Generalized Renewal Processes are useful for approaching the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for Generalized Renewal Processes in general and for Weibull-based Generalized Renewal Processes in particular. In a departure from the literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
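A minimal sketch of random sampling from a Weibull-based GRP under a Kijima Type I virtual age, using inverse-transform sampling of the conditional Weibull survival function; the function name and parameter values are illustrative, and the mixed Type I/II model of the paper is not reproduced here.

```python
import numpy as np

def sample_grp_kijima1(alpha, beta, q, n_failures, seed=5):
    """Sample failure times from a Weibull-based GRP with Kijima Type I
    virtual age V_n = V_{n-1} + q * X_n (q=0: as good as new; q=1: as bad
    as old). Inter-arrival times follow the conditional Weibull law."""
    rng = np.random.default_rng(seed)
    v, t, times = 0.0, 0.0, []
    for _ in range(n_failures):
        u = rng.uniform()
        # invert S(x|v) = exp((v/a)^b - ((v+x)/a)^b) = u for x
        x = alpha * ((v / alpha) ** beta - np.log(u)) ** (1 / beta) - v
        t += x
        times.append(t)
        v += q * x      # Kijima I: repair removes a fraction (1 - q) of damage
    return np.array(times)

print(sample_grp_kijima1(alpha=100.0, beta=1.8, q=0.4, n_failures=5).round(1))
```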
Gillespie, Nathan A; Eaves, Lindon J; Maes, Hermine; Silberg, Judy L
2015-07-01
We tested two models to identify the genetic and environmental processes underlying longitudinal changes in depression among adolescents. The first assumes that observed changes in covariance structure result from the unfolding of inherent, random individual differences in the overall levels and rates of change in depression over time (random growth curves). The second assumes that observed changes are due to time-specific random effects (innovations) accumulating over time (autoregressive effects). We found little evidence of age-specific genetic effects or persistent genetic innovations. Instead, genetic effects are consistent with a gradual unfolding in the liability to depression and rates of change with increasing age. Likewise, the environment also creates significant individual differences in overall levels of depression and rates of change. However, there are also time-specific environmental experiences that persist with fidelity. The implications of these differing genetic and environmental mechanisms in the etiology of depression are considered.
Urbanisation tolerance and the loss of avian diversity.
Sol, Daniel; González-Lagos, Cesar; Moreira, Darío; Maspons, Joan; Lapiedra, Oriol
2014-08-01
Urbanisation is considered an important driver of current biodiversity loss, but the underlying causes are not fully understood. It is generally assumed that this loss reflects the fact that most organisms do not tolerate well the environmental alterations associated with urbanisation. Nevertheless, current evidence is inconclusive, and the alternative that the biodiversity loss is the result of random mechanisms has never been evaluated. Analysing changes in abundance between urbanised environments and their non-urbanised surroundings for more than 800 avian species from five continents, we show here that although random processes account for part of the species loss associated with urbanisation, much of the loss is associated with a lack of appropriate adaptations in most species for exploiting resources and avoiding risks in urban environments. These findings have important conservation implications, because the extinction of species with particular features should have a higher impact on biodiversity and ecosystem function than a random loss. © 2014 John Wiley & Sons Ltd/CNRS.
Optimizing Urine Processing Protocols for Protein and Metabolite Detection.
Siddiqui, Nazema Y; DuBois, Laura G; St John-Williams, Lisa; Thompson, J Will; Grenier, Carole; Burke, Emily; Fraser, Matthew O; Amundsen, Cindy L; Murphy, Susan K
In urine, factors such as timing of voids and duration at room temperature (RT) may affect the quality of recovered protein and metabolite data. Additives may aid with detection, but can add more complexity to sample collection or analysis. We aimed to identify the optimal urine processing protocol for clinically obtained urine samples that allows for the highest protein and metabolite yields with minimal degradation. Healthy women provided multiple urine samples during the same day. Women collected their first morning (1st AM) void and another "random void". Random voids were aliquoted with: 1) no additive; 2) boric acid (BA); 3) protease inhibitor (PI); or 4) both BA + PI. Of these aliquots, some were immediately stored at 4°C, and some were left at RT for 4 hours. Proteins and individual metabolites were quantified, normalized to creatinine concentrations, and compared across processing conditions. Sample pools corresponding to each processing condition were analyzed using mass spectrometry to assess protein degradation. Ten Caucasian women between 35 and 65 years of age provided paired 1st morning and random voided urine samples. Normalized protein concentrations were slightly higher in 1st AM voids compared to random "spot" voids. The addition of BA did not significantly change protein yields, while PI significantly improved normalized protein concentrations, regardless of whether samples were immediately cooled or left at RT for 4 hours. In pooled samples, there were minimal differences in protein degradation under the various conditions we tested. In metabolite analyses, there were significant differences in individual amino acids based on the timing of the void. For comparative translational research using urine, information about void timing should be collected and standardized. For urine samples processed on the same day, BA does not appear to be necessary, while the addition of PI enhances protein yields, regardless of 4°C or RT storage temperature.
Experimentally generated randomness certified by the impossibility of superluminal signals.
Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K
2018-04-01
From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10^-12. These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.
Probability of stress-corrosion fracture under random loading.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
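The two scaling results can be checked with a short Monte Carlo experiment: treating the cumulative damage as a sum of i.i.d. per-cycle increments (an illustrative stand-in, not a calibrated stress-corrosion model), the standard deviation should grow like the square root of time and the coefficient of variation decay like its inverse.

```python
import numpy as np

# Monte Carlo check of the sqrt(t) scaling of cumulative damage under
# stationary loading: i.i.d. positive per-cycle increments are summed, so
# std(D(t)) ~ sqrt(t) and cov(D(t)) = std/mean ~ 1/sqrt(t).
rng = np.random.default_rng(6)
n_paths, n_cycles = 20000, 4096
incr = rng.lognormal(mean=-6.0, sigma=0.5, size=(n_paths, n_cycles))
damage = np.cumsum(incr, axis=1)
for t in (256, 1024, 4096):       # quadrupling t should double the std
    d = damage[:, t - 1]
    print(f"t={t:5d}  std={d.std():.4f}  cov={d.std() / d.mean():.4f}")
```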
Phenotypic switching of populations of cells in a stochastic environment
NASA Astrophysics Data System (ADS)
Hufton, Peter G.; Lin, Yen Ting; Galla, Tobias
2018-02-01
In biology phenotypic switching is a common bet-hedging strategy in the face of uncertain environmental conditions. Existing mathematical models often focus on periodically changing environments to determine the optimal phenotypic response. We focus on the case in which the environment switches randomly between discrete states. Starting from an individual-based model we derive stochastic differential equations to describe the dynamics, and obtain analytical expressions for the mean instantaneous growth rates based on the theory of piecewise-deterministic Markov processes. We show that optimal phenotypic responses are non-trivial for slow and intermediate environmental processes, and systematically compare the cases of periodic and random environments. The best response to random switching is more likely to be heterogeneity than in the case of deterministic periodic environments, and net growth rates tend to be higher under stochastic environmental dynamics. The combined system of environment and population of cells can be interpreted as a host-pathogen interaction, in which the host tries to choose environmental switching so as to minimise growth of the pathogen, and in which the pathogen employs a phenotypic switching optimised to increase its growth rate. We discuss the existence of Nash-like mutual best-response scenarios for such host-pathogen games.
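A minimal numerical sketch of the piecewise-deterministic idea: a two-state environment flips at random, two phenotypes grow at environment-dependent rates and interconvert at a fixed switching rate, and the long-run growth rate is read off from the accumulated log population size. All rates, the Euler discretization, and the function name are illustrative assumptions.

```python
import numpy as np

def mean_growth_rate(k_env=0.5, k_sw=0.2, g=((1.0, -0.5), (-0.5, 1.0)),
                     T=1000.0, dt=0.01, seed=7):
    """Two phenotypes in a randomly switching two-state environment:
    g[p][e] is the growth rate of phenotype p in environment e, k_sw the
    phenotype switching rate, k_env the environmental flip rate. Returns
    the empirical long-run (Lyapunov) growth rate."""
    rng = np.random.default_rng(seed)
    n, env, logN = np.array([0.5, 0.5]), 0, 0.0
    for _ in range(int(T / dt)):
        if rng.uniform() < k_env * dt:                  # environment flips
            env = 1 - env
        growth = np.array([g[0][env], g[1][env]])
        n = n + (growth * n + k_sw * (n[::-1] - n)) * dt
        total = n.sum()                                 # accumulate log growth
        logN += np.log(total)
        n = n / total                                   # renormalize fractions
    return logN / T

print(f"long-run growth rate: {mean_growth_rate():.3f}")
```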
Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo
2010-01-01
The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of the frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement did not differ significantly between the band-eliminated noise conditions. Thus the present study confirms that constant sound signal sequencing under nonattentive listening can enhance, but not sharpen, neural activity in the human auditory cortex. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.
Robust-yet-fragile nature of interdependent networks
NASA Astrophysics Data System (ADS)
Tan, Fei; Xia, Yongxiang; Wei, Zhi
2015-05-01
Interdependent networks have been shown to be extremely vulnerable based on the percolation model. Parshani et al. [Europhys. Lett. 92, 68002 (2010), 10.1209/0295-5075/92/68002] further indicated that the more intersimilar networks are, the more robust they are to random failures. When traffic load is considered, how do the coupling patterns impact cascading failures in interdependent networks? This question has been largely unexplored until now. In this paper, we address this question by investigating the robustness of interdependent Erdős-Rényi random graphs and Barabási-Albert scale-free networks under either random failures or intentional attacks. It is found that interdependent Erdős-Rényi random graphs are robust yet fragile under both random failures and intentional attacks. Interdependent Barabási-Albert scale-free networks, however, are robust yet fragile under random failures but fragile under intentional attacks. We further analyze the interdependent communication network and power grid and achieve similar results. These results advance our understanding of how interdependency shapes network robustness.
Schlomann, Brandon H
2018-06-06
A central problem in population ecology is understanding the consequences of stochastic fluctuations. Analytically tractable models with Gaussian driving noise have led to important, general insights, but they fail to capture rare, catastrophic events, which are increasingly observed at scales ranging from global fisheries to intestinal microbiota. Due to mathematical challenges, growth processes with random catastrophes are less well characterized and it remains unclear how their consequences differ from those of Gaussian processes. In the face of a changing climate and predicted increases in ecological catastrophes, as well as increased interest in harnessing microbes for therapeutics, these processes have never been more relevant. To better understand them, I revisit here a differential equation model of logistic growth coupled to density-independent catastrophes that arrive as a Poisson process, and derive new analytic results that reveal its statistical structure. First, I derive exact expressions for the model's stationary moments, revealing a single effective catastrophe parameter that largely controls low order statistics. Then, I use weak convergence theorems to construct its Gaussian analog in a limit of frequent, small catastrophes, keeping the stationary population mean constant for normalization. Numerically computing statistics along this limit shows how they transform as the dynamics shifts from catastrophes to diffusions, enabling quantitative comparisons. For example, the mean time to extinction increases monotonically by orders of magnitude, demonstrating significantly higher extinction risk under catastrophes than under diffusions. Together, these results provide insight into a wide range of stochastic dynamical systems important for ecology and conservation. Copyright © 2018 Elsevier Ltd. All rights reserved.
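The model revisited here is easy to simulate exactly between catastrophes, because the logistic equation has a closed-form solution over each quiet interval. The following sketch couples that solution to Poisson-arriving catastrophes that multiply the population by a fixed surviving fraction; the function name and parameter values are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def logistic_with_catastrophes(r=1.0, K=1.0, lam=0.2, f=0.3, T=200.0, seed=7):
    """Logistic growth punctuated by density-independent catastrophes
    arriving as a Poisson process with rate `lam`; each catastrophe
    multiplies the population by the surviving fraction `f`. Between
    catastrophes the logistic ODE is solved exactly."""
    rng = np.random.default_rng(seed)
    t, n, traj = 0.0, 0.5 * K, []
    while t < T:
        w = rng.exponential(1.0 / lam)     # waiting time to the next event
        # exact logistic solution over the quiet interval of length w
        n = K * n * np.exp(r * w) / (K + n * (np.exp(r * w) - 1.0))
        n *= f                             # catastrophe hits
        t += w
        traj.append((t, n))
    return np.array(traj)

traj = logistic_with_catastrophes()
print("mean post-catastrophe population:", traj[:, 1].mean().round(3))
```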
Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications
NASA Technical Reports Server (NTRS)
Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Staller, C.; Zhou, Z;
1994-01-01
JPL, under sponsorship from the NASA Office of Advanced Concepts and Technology, has been developing a second-generation solid-state image sensor technology. Charge-coupled devices (CCD) are a well-established first generation image sensor technology. For both commercial and NASA applications, CCDs have numerous shortcomings. In response, the active pixel sensor (APS) technology has been under research. The major advantages of APS technology are the ability to integrate on-chip timing, control, signal-processing and analog-to-digital converter functions, reduced sensitivity to radiation effects, low power operation, and random access readout.
Aligned and Unaligned Coherence: A New Diagnostic Tool
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
The study of combustion noise from turbofan engines has become important again as the noise from other sources, like the fan and jet, is reduced. A method has been developed to help identify combustion noise spectra using an aligned and unaligned coherence technique. When used with the well-known three-signal coherent power method and the coherent power method, it provides new information by separating tonal information from random-process information. Examples are presented showing the underlying tonal structure that is buried under broadband noise and jet noise. The method is applied to data from a Pratt and Whitney PW4098 turbofan engine.
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown from a theoretical perspective to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
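The width of these confidence intervals is easy to reproduce. The sketch below computes the exact (Garwood) Poisson interval for a slide count and scales it by a McMaster multiplication factor; the factor of 50 eggs per gram per counted egg is a common choice used here purely for illustration.

```python
import numpy as np
from scipy.stats import chi2

def poisson_ci(count, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson count, the
    appropriate model for eggs on a McMaster slide from a well mixed sample."""
    a = 1 - conf
    lo = 0.0 if count == 0 else chi2.ppf(a / 2, 2 * count) / 2
    hi = chi2.ppf(1 - a / 2, 2 * (count + 1)) / 2
    return lo, hi

# e.g. 12 eggs counted, each representing 50 eggs per gram (an assumed
# McMaster multiplication factor, for illustration only):
lo, hi = poisson_ci(12)
print(f"count 12 -> 95% CI for epg: {50 * lo:.0f} to {50 * hi:.0f}")
```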
Kendall, W.L.; Nichols, J.D.; North, P.M.; Nichols, J.D.
1995-01-01
The use of the Cormack-Jolly-Seber model under a standard sampling scheme of one sample per time period, when the Jolly-Seber assumption that all emigration is permanent does not hold, leads to the confounding of temporary emigration probabilities with capture probabilities. This biases the estimates of capture probability when temporary emigration is a completely random process, and of both capture and survival probabilities when there is a temporary trap response in temporary emigration, or it is Markovian. The use of secondary capture samples over a shorter interval within each period, during which the population is assumed to be closed (Pollock's robust design), provides a second source of information on capture probabilities. This solves the confounding problem, and thus temporary emigration probabilities can be estimated. This can be accomplished in an ad hoc fashion for completely random temporary emigration and to some extent in the temporary trap response case, but modelling the complete sampling process provides more flexibility and permits direct estimation of variances. For the case of Markovian temporary emigration, a full likelihood is required.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo methods, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
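A minimal sketch of the random-basis surrogate idea, assuming random Fourier features fitted by ridge regression to evaluations of the log-density; the resulting cheap surrogate and its analytic gradient are what would drive the Hamiltonian dynamics. Function names and hyperparameters are illustrative, and the full HMC loop is omitted.

```python
import numpy as np

def random_basis_surrogate(X, y, n_features=100, lengthscale=1.0, lam=1e-3, seed=11):
    """Fit a surrogate to an expensive log-density from evaluations (X, y)
    via ridge regression on random Fourier features, yielding a cheap,
    differentiable stand-in for use inside HMC leapfrog steps."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    Phi = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)       # random bases
    coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features), Phi.T @ y)

    def surrogate(x):
        return np.sqrt(2.0 / n_features) * np.cos(x @ W + b) @ coef

    def grad(x):  # chain rule: d/dx cos(x.W + b) = -sin(x.W + b) * W
        return (-np.sqrt(2.0 / n_features) * np.sin(x @ W + b) * W) @ coef

    return surrogate, grad

# toy usage: learn a 2-D standard Gaussian log-density from evaluations
rng = np.random.default_rng(12)
X = rng.normal(size=(500, 2))
y = -0.5 * np.sum(X ** 2, axis=1)
f, g = random_basis_surrogate(X, y)
print("surrogate at origin:", f(np.zeros(2)).round(2), "grad:", g(np.zeros(2)).round(2))
```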
On Nonconvex Decentralized Gradient Descent
2016-08-01
Emergence of biological organization through thermodynamic inversion.
Kompanichenko, Vladimir
2014-01-01
Biological organization arises under thermodynamic inversion in prebiotic systems that provide the prevalence of the free energy and information contribution over the entropy contribution. The inversion might occur under specific far-from-equilibrium conditions in prebiotic systems oscillating around the bifurcation point. At the inversion moment, the (physical) information characteristic of non-biological systems acquires new features: functionality, purposefulness, and control over life processes, which transform it into biological information. Random sequences of amino acids and nucleotides, spontaneously synthesized in the prebiotic microsystem, re-assemble in the primary living unit (probiont) into functional sequences, involved in bioinformation circulation through nucleoprotein interaction, resulting in the emergence of the genetic code. According to the proposed concept, oscillating three-dimensional prebiotic microsystems transformed into probionts in the changeable hydrothermal medium of the early Earth. The inversion concept states that spontaneous (accidental, random) transformations in prebiotic systems cannot produce life; it is only non-spontaneous (perspective, purposeful) transformations, which are the result of thermodynamic inversion, that lead to the negentropy conversion of prebiotic systems into initial living units.
A random walk model for evaluating clinical trials involving serial observations.
Hopper, J L; Young, G P
1988-05-01
For clinical trials where the variable of interest is ordered and categorical (for example, disease severity, symptom scale), and where measurements are taken at intervals, it might be possible to achieve greater discrimination between the efficacy of treatments by modelling each patient's progress as a stochastic process. The random walk is a simple, easily interpreted model that can be fitted by maximum likelihood using a maximization routine, with inference based on standard likelihood theory. In general the model can allow for randomly censored data, incorporates measured prognostic factors, and inference is conditional on the (possibly non-random) allocation of patients. Tests of fit and of model assumptions are proposed, and applications to two therapeutic trials of gastroenterological disorders are presented. The model gave measures of the rate of, and variability in, improvement for patients under different treatments. A small simulation study suggested that the model is more powerful than considering the difference between initial and final scores, even when applied to data generated by a mechanism other than the random walk model assumed in the analysis. It thus provides a useful additional statistical method for evaluating clinical trials.
Operational Modal Analysis of Bridge Structures with Data from GNSS/Accelerometer Measurements.
Xiong, Chunbao; Lu, Huali; Zhu, Jinsong
2017-02-23
Real-time dynamic displacement and acceleration responses of the main span section of the Tianjin Fumin Bridge in China under ambient excitation were tested using a Global Navigation Satellite System (GNSS) dynamic deformation monitoring system and an acceleration sensor vibration test system. Considering the close relationship between the GNSS multipath errors and the measurement environment, in combination with the noise reduction characteristics of different filtering algorithms, the researchers proposed an AFEC mixed filtering algorithm, which is a combination of autocorrelation function-based empirical mode decomposition (EMD) and Chebyshev mixed filtering, to extract the real vibration displacement of the bridge structure after system error correction and filtering de-noising of signals collected by the GNSS. The proposed AFEC mixed filtering algorithm achieved high accuracy (1 mm) for real displacement in the elevation direction. Next, the traditional random decrement technique (used mainly for stationary random processes) was expanded to non-stationary random processes. Combining the expanded random decrement technique (RDT) and an autoregressive moving average model (ARMA), the modal frequency of the bridge structural system was extracted using an expanded ARMA_RDT modal identification method, which was compared with the power spectrum analysis results of the acceleration signal and finite element analysis results. Identification results demonstrated that the proposed algorithm is applicable to analyzing the dynamic displacement monitoring data of real bridge structures under ambient excitation and could identify the first five orders of the inherent frequencies of the structural system accurately. The identification error of the inherent frequency was smaller than 6%, indicating the high identification accuracy of the proposed algorithm. Furthermore, the GNSS dynamic deformation monitoring method can be used to monitor dynamic displacement and identify the modal parameters of bridge structures. The GNSS can monitor the working state of bridges effectively and accurately. Research results can provide references to evaluate the bearing capacity, safety performance, and durability of bridge structures during operation.
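The random decrement step generalized in this paper has a simple classical form: average segments of the response signal that start at each up-crossing of a trigger level, which estimates a free-decay signature for modal identification. The sketch below implements that classical (stationary) version on a simulated noisy oscillator; the ARMA fitting stage and the non-stationary extension are not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np

def random_decrement(x, trigger, seg_len):
    """Classical random decrement technique: average segments of a response
    signal that start wherever the signal up-crosses the trigger level."""
    idx = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
    idx = idx[idx + seg_len < x.size]
    return np.mean([x[i:i + seg_len] for i in idx], axis=0)

# toy usage: a lightly damped 2 Hz oscillator excited by white noise
rng = np.random.default_rng(8)
fs, T, f0, zeta = 100.0, 600.0, 2.0, 0.02
w0, dt = 2 * np.pi * f0, 1 / fs
x = np.zeros(int(T * fs)); v = 0.0
for k in range(1, x.size):                 # crude semi-implicit Euler stepping
    a = -2 * zeta * w0 * v - w0 ** 2 * x[k - 1] + 50 * rng.normal()
    v += a * dt
    x[k] = x[k - 1] + v * dt
sig = random_decrement(x, trigger=x.std(), seg_len=int(2 * fs))
print("random decrement signature length:", sig.size)
```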
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_{\alpha}(N_{\beta}(t)) \stackrel{d}{=} \sum_{j=1}^{N_{\beta}(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_{\alpha}(\tau_k^{\nu})$, $\nu\in(0,1]$, where $\tau_k^{\nu}$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
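The first identity can be checked numerically: conditioning on $N_\beta(t)=n$, the composition $N_\alpha(n)$ is Poisson with mean $\alpha n$, i.e. a sum of $n$ i.i.d. Poisson($\alpha$) variables. A small Monte Carlo sketch (the rates and horizon below are arbitrary choices, not values from the paper):

import numpy as np

rng = np.random.default_rng(1)
alpha, beta, t, trials = 2.0, 3.0, 1.0, 100_000

inner = rng.poisson(beta * t, size=trials)                    # N_beta(t)
lhs = rng.poisson(alpha * inner)                              # N_alpha(N_beta(t))
rhs = np.array([rng.poisson(alpha, n).sum() for n in inner])  # random sum

print(lhs.mean(), rhs.mean())  # both approach alpha*beta*t = 6
print(lhs.var(), rhs.var())    # both approach alpha*beta*t*(1 + alpha) = 18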
Evolutionary games on cycles with strong selection
NASA Astrophysics Data System (ADS)
Altrock, P. M.; Traulsen, A.; Nowak, M. A.
2017-02-01
Evolutionary games on graphs describe how strategic interactions and population structure determine evolutionary success, quantified by the probability that a single mutant takes over a population. Graph structures, compared to the well-mixed case, can act as amplifiers or suppressors of selection by increasing or decreasing the fixation probability of a beneficial mutant. Properties of the associated mean fixation times can be more intricate, especially when selection is strong. The intuition is that fixation of a beneficial mutant happens fast in a dominance game, that fixation takes very long in a coexistence game, and that strong selection eliminates demographic noise. Here we show that these intuitions can be misleading in structured populations. We analyze mean fixation times on the cycle graph under strong frequency-dependent selection for two different microscopic evolutionary update rules (death-birth and birth-death). We establish exact analytical results for fixation times under strong selection and show that there are coexistence games in which fixation occurs in time polynomial in population size. Depending on the underlying game, we observe persistence of demographic noise even under strong selection if the process is driven by random death before selection for birth of an offspring (death-birth update). In contrast, if selection for an offspring occurs before random removal (birth-death update), then strong selection can remove demographic noise almost entirely.
Fisher's geometric model predicts the effects of random mutations when tested in the wild.
Stearns, Frank W; Fenster, Charles B
2016-02-01
Fisher's geometric model of adaptation (FGM) has been the conceptual foundation for studies investigating the genetic basis of adaptation since the onset of the neo-Darwinian synthesis. FGM describes adaptation as the movement of a genotype toward a fitness optimum due to beneficial mutations. To date, one prediction of FGM, namely that the probability of improvement is related to the distance from the optimum, has only been tested in microorganisms under laboratory conditions. There is reason to believe that results might differ under natural conditions where more mutations likely affect fitness, and where environmental variance may obscure the expected pattern. We chemically induced mutations in a set of 19 Arabidopsis thaliana accessions from across the native range of A. thaliana and planted them alongside the premutated founder lines in two habitats in the mid-Atlantic region of the United States under field conditions. We show that FGM is able to predict the outcome of a set of random induced mutations on fitness in a set of A. thaliana accessions grown in the wild: mutations are more likely to be beneficial in relatively less fit genotypes. This finding suggests that FGM is an accurate approximation of the process of adaptation under more realistic ecological conditions. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele
2016-07-01
This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law $m$). We introduce a tool, the cut-off process, by means of which one finds that renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. This limit law is stationary, in the sense that gluing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law $m$. In other words, the stationary random metric is the solution of a distributional equation. When the measure $m$ has continuous positive density on $\mathbf{R}_+$, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when $m$ is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
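A sketch of this random pulse model with illustrative parameter values (the report's actual arrival rate, amplitude scale, and candidate locations are not reproduced here): Poisson arrivals, zero-mean normal amplitudes, and a location drawn uniformly from three candidate points.

import numpy as np

rng = np.random.default_rng(2)
rate, sigma, t_end = 5.0, 1.0, 10.0           # mean arrival rate, amplitude std

n_pulses = rng.poisson(rate * t_end)          # Poisson number of impacts
arrival_times = np.sort(rng.uniform(0.0, t_end, n_pulses))
amplitudes = rng.normal(0.0, sigma, n_pulses) # zero-mean normal intensities
locations = rng.integers(0, 3, n_pulses)      # 3 equally likely tip points

# Each (time, amplitude, location) triple drives one impact in a
# downstream structural-response computation (e.g., Monte Carlo).
for tau, a, loc in zip(arrival_times[:5], amplitudes[:5], locations[:5]):
    print(f"t = {tau:5.2f}  amplitude = {a:+.2f}  location = {loc}")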
Non-linear continuous time random walk models
NASA Astrophysics Data System (ADS)
Stage, Helena; Fedotov, Sergei
2017-11-01
A standard assumption of continuous time random walk (CTRW) processes is that there are no interactions between the random walkers, such that we obtain the celebrated linear fractional equation either for the probability density function of the walker at a certain position and time, or the mean number of walkers. The question arises how one can extend this equation to the non-linear case, where the random walkers interact. The aim of this work is to take into account this interaction under a mean-field approximation where the statistical properties of the random walker depend on the mean number of walkers. The implementation of these non-linear effects within the CTRW integral equations or fractional equations poses difficulties, leading to the alternative methodology we present in this work. We are concerned with non-linear effects which may either inhibit anomalous effects or induce them where they otherwise would not arise. Inhibition of these effects corresponds to a decrease in the waiting times of the random walkers, be this due to overcrowding, competition between walkers or an inherent carrying capacity of the system. Conversely, induced anomalous effects present longer waiting times and are consistent with symbiotic, collaborative or social walkers, or indirect pinpointing of favourable regions by their attractiveness. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
Rumor Processes in Random Environment on ℕ and on Galton-Watson Trees
NASA Astrophysics Data System (ADS)
Bertacchi, Daniela; Zucca, Fabio
2013-11-01
The aim of this paper is to study rumor processes in random environment. In a rumor process a signal starts from the stations of a fixed vertex (the root) and travels on a graph from vertex to vertex. We consider two rumor processes. In the firework process each station, when reached by the signal, transmits it up to a random distance. In the reverse firework process, on the other hand, stations do not send any signal but they "listen" for it up to a random distance. The first random environment that we consider is the deterministic 1-dimensional tree with a random number of stations on each vertex; in this case the root is the origin of ℕ. We give conditions for the survival/extinction on almost every realization of the sequence of stations. Later on, we study the processes on Galton-Watson trees with a random number of stations on each vertex. We show that if the probability of survival is positive, then there is survival on almost every realization of the infinite tree such that there is at least one station at the root. We characterize the survival of the process in some cases and we give sufficient conditions for survival/extinction.
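A rough simulation sketch of the firework process on ℕ, assuming Poisson station counts and geometric transmission radii (the paper treats general distributions); survival here is only judged up to a finite horizon:

import numpy as np

rng = np.random.default_rng(3)
horizon, mean_stations, p_radius = 10_000, 0.5, 0.5

stations = rng.poisson(mean_stations, horizon)  # stations per vertex of N
frontier, i = 0, 0                              # rightmost informed vertex
while i <= frontier and frontier < horizon - 1:
    if stations[i] > 0:                         # vertex i transmits up to the
        radius = rng.geometric(p_radius, stations[i]).max()  # largest radius
        frontier = max(frontier, min(i + radius, horizon - 1))
    i += 1

if frontier < horizon - 1:
    print("rumor died at vertex", frontier)
else:
    print("rumor survived to the simulation horizon")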
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The probability density functions (PDFs) and the rolling probability of a ship under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA(2,1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.
Listening to the noise: random fluctuations reveal gene network parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munsky, Brian; Khammash, Mustafa
2009-01-01
The cellular environment is abuzz with noise. The origin of this noise is attributed to the inherent random motion of reacting molecules that take part in gene expression and post expression interactions. In this noisy environment, clonal populations of cells exhibit cell-to-cell variability that frequently manifests as significant phenotypic differences within the cellular population. The stochastic fluctuations in cellular constituents induced by noise can be measured and their statistics quantified. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries its distinctive fingerprint that encodes a wealth of information about that network. We demonstrate that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. This establishes a potentially powerful approach for the identification of gene networks and offers a new window into the workings of these networks.
Listening to the Noise: Random Fluctuations Reveal Gene Network Parameters
NASA Astrophysics Data System (ADS)
Munsky, Brian; Trinh, Brooke; Khammash, Mustafa
2010-03-01
The cellular environment is abuzz with noise originating from the inherent random motion of reacting molecules in the living cell. In this noisy environment, clonal cell populations exhibit cell-to-cell variability that can manifest significant phenotypic differences. Noise-induced stochastic fluctuations in cellular constituents can be measured and their statistics quantified using flow cytometry, single molecule fluorescence in situ hybridization, time lapse fluorescence microscopy and other single cell and single molecule measurement techniques. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries its distinctive fingerprint that encodes a wealth of information about that network. We demonstrate that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. We use theoretical investigations to establish experimental guidelines for the identification of gene regulatory networks, and we apply these guidelines to experimentally identify predictive models for different regulatory mechanisms in bacteria and yeast.
A Mechanical Model of Brownian Motion for One Massive Particle Including Slow Light Particles
NASA Astrophysics Data System (ADS)
Liang, Song
2018-01-01
We provide a connection between Brownian motion and a classical mechanical system. Precisely, we consider a system of one massive particle interacting with an ideal gas, evolved according to non-random mechanical principles, via interaction potentials, without any assumption requiring that the initial velocities of the environmental particles should be restricted to be "fast enough". We prove the convergence of the (position, velocity)-process of the massive particle under a certain scaling limit, such that the mass of the environmental particles converges to 0 while the density and the velocities of them go to infinity, and give the precise expression of the limiting process, a diffusion process.
Time-dependent real space RG on the spin-1/2 XXZ chain
NASA Astrophysics Data System (ADS)
Mason, Peter; Zagoskin, Alexandre; Betouras, Joseph
In order to measure the spread of information in a system of interacting fermions with nearest-neighbour couplings and strong bond disorder, one could utilise a dynamical real space renormalisation group (RG) approach on the spin-1/2 XXZ chain. Under such a procedure, a many-body localised state is established as an infinite randomness fixed point and the entropy scales with time as log(log(t)). One interesting further question that results from such a study is the case when the Hamiltonian explicitly depends on time. Here we answer this question by considering a dynamical renormalisation group treatment on the strongly disordered random spin-1/2 XXZ chain where the couplings are time-dependent and chosen to reflect a (slow) evolution of the governing Hamiltonian. Under the condition that the renormalisation process occurs at fixed time, a set of coupled second-order, nonlinear PDEs can be written down in terms of the random distributions of the bonds and fields. Solution of these flow equations at the relevant critical fixed points leads us to establish the dynamics of the flow as we sweep through the quantum critical point of the Hamiltonian. We will present these critical flows as well as discussing the issues of duality, entropy and many-body localisation.
Wheeler, David C.; Burstyn, Igor; Vermeulen, Roel; Yu, Kai; Shortreed, Susan M.; Pronk, Anjoeka; Stewart, Patricia A.; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Silverman, Debra T.; Friesen, Melissa C.
2014-01-01
Objectives Evaluating occupational exposures in population-based case-control studies often requires exposure assessors to review each study participant's reported occupational information job-by-job to derive exposure estimates. Although such assessments likely have underlying decision rules, they usually lack transparency, are time-consuming, and have uncertain reliability and validity. We aimed to identify the underlying rules to enable documentation, review, and future use of these expert-based exposure decisions. Methods Classification and regression trees (CART, predictions from a single tree) and random forests (predictions from many trees) were used to identify the underlying rules from the questionnaire responses and an expert's exposure assignments for occupational diesel exhaust exposure for several metrics: binary exposure probability and ordinal exposure probability, intensity, and frequency. Data were split into training (n=10,488 jobs), testing (n=2,247), and validation (n=2,248) data sets. Results The CART and random forest models' predictions agreed with 92–94% of the expert's binary probability assignments. For ordinal probability, intensity, and frequency metrics, the two models extracted decision rules more successfully for unexposed and highly exposed jobs (86–90% and 57–85%, respectively) than for low or medium exposed jobs (7–71%). Conclusions CART and random forest models extracted decision rules and accurately predicted an expert's exposure decisions for the majority of jobs and identified questionnaire response patterns that would require further expert review if the rules were applied to other jobs in the same or a different study. This approach makes the exposure assessment process in case-control studies more transparent and creates a mechanism to efficiently replicate exposure decisions in future studies. PMID:23155187
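A schematic of this workflow with hypothetical questionnaire columns (the study's variables and data are not public here); a single tree exposes inspectable decision rules, while the forest gives more stable predictions:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training table: questionnaire responses plus the
# expert's binary diesel-exhaust exposure assignment.
jobs = pd.DataFrame({
    "job_title_code":    [10, 22, 22, 31, 10, 31],
    "works_near_trucks": [1, 0, 1, 1, 0, 0],
    "indoor_job":        [0, 1, 0, 0, 1, 1],
    "expert_exposed":    [1, 0, 1, 1, 0, 0],
})
X, y = jobs.drop(columns="expert_exposed"), jobs["expert_exposed"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # extracted rules

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(forest.predict(X))  # agreement with the expert on the training jobs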
Diversity of Poissonian populations.
Eliazar, Iddo I; Sokolov, Igor M
2010-01-01
Populations represented by collections of points scattered randomly on the real line are ubiquitous in science and engineering. The statistical modeling of such populations leads naturally to Poissonian populations-Poisson processes on the real line with a distinguished maximal point. Poissonian populations are infinite objects underlying key issues in statistical physics, probability theory, and random fractals. Due to their infiniteness, measuring the diversity of Poissonian populations depends on the lower-bound cut-off applied. This research characterizes the classes of Poissonian populations whose diversities are invariant with respect to the cut-off level applied and establishes an elemental connection between these classes and extreme-value theory. The measures of diversity considered are variance and dispersion, Simpson's index and inverse participation ratio, Shannon's entropy and Rényi's entropy, and Gini's index.
Information processing in dendrites I. Input pattern generalisation.
Gurney, K N
2001-10-01
In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron-the multi-cube unit (MCU)-which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric-the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.
Killing (absorption) versus survival in random motion
NASA Astrophysics Data System (ADS)
Garbaczewski, Piotr
2017-09-01
We address diffusion processes in a bounded domain, while focusing on somewhat unexplored affinities between the presence of absorbing and/or inaccessible boundaries. For the Brownian motion (Lévy-stable cases are briefly mentioned) model-independent features are established of the dynamical law that underlies the short-time behavior of these random paths, whose overall lifetime is predefined to be long. As a by-product, the limiting regime of a permanent trapping in a domain is obtained. We demonstrate that the adopted conditioning method, involving the so-called Bernstein transition function, works properly also in an unbounded domain, for stochastic processes with killing (Feynman-Kac kernels play the role of transition densities), provided the spectrum of the related semigroup operator is discrete. The method is shown to be useful in the case, when the spectrum of the generator goes down to zero and no isolated minimal (ground state) eigenvalue is in existence, like in the problem of the long-term survival on a half-line with a sink at origin.
NASA Astrophysics Data System (ADS)
Ishii, Yuichiro; Tanaka, Miki; Yabuuchi, Makoto; Sawada, Yohei; Tanaka, Shinji; Nii, Koji; Lu, Tien Yu; Huang, Chun Hsien; Sian Chen, Shou; Tse Kuo, Yu; Lung, Ching Cheng; Cheng, Osbert
2018-04-01
We propose a highly symmetrical 10-transistor (10T) 2-read/write (2RW) dual-port (DP) static random access memory (SRAM) bitcell in 28 nm high-k/metal-gate (HKMG) planar bulk CMOS. It replaces the conventional 8T 2RW DP SRAM bitcell without any area overhead. It significantly improves the robustness to process variations and an asymmetry issue between the true and bar bitline pairs. Measured data show that the read current (I_read) and the read static noise margin (SNM) are boosted by +20% and +15 mV, respectively, by introducing the proposed bitcell with enlarged pull-down (PD) and pass-gate (PG) N-channel MOSs (NMOSs). The minimum operating voltage (V_min) of the proposed 256 kbit 10T DP SRAM is 0.53 V in the TT process at 25 °C under the worst access condition with read/write disturbances, improved by 90 mV (15%) compared with the conventional one.
Statistical mechanics of complex economies
NASA Astrophysics Data System (ADS)
Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo
2017-04-01
In the pursuit of ever increasing efficiency and growth, our economies have evolved to remarkable degrees of complexity, with nested production processes feeding each other in order to create products of greater sophistication from less sophisticated ones, down to raw materials. The engine of such an expansion has been competitive markets that, according to general equilibrium theory (GET), achieve efficient allocations under specific conditions. We study large random economies within the GET framework, as templates of complex economies, and we find that a non-trivial phase transition occurs: the economy freezes in a state where all production processes collapse when either the number of primary goods or the number of available technologies falls below a critical threshold. As in other examples of phase transitions in large random systems, this is an unintended consequence of the growth in complexity. Our findings suggest that the Industrial Revolution can be regarded as a sharp transition between different phases, but also imply that well developed economies can collapse if too many intermediate goods are introduced.
A reduced-form intensity-based model under fuzzy environments
NASA Astrophysics Data System (ADS)
Wu, Liang; Zhuang, Yaming
2015-05-01
External shocks and internal contagion are important sources of default events. However, the external shocks and the internal contagion effect on a company are not directly observed, so we cannot get the accurate size of the shocks. The information available to investors about the default process thus exhibits a certain fuzziness. Therefore, using randomness and fuzziness together to study such problems as derivative pricing or default probability meets a practical need. But the idea of fuzzifying credit risk models is little exploited, especially in reduced-form models. This paper proposes a new default intensity model with fuzziness, presents a fuzzy default probability and default loss rate, and applies them to defaultable debt and credit derivative pricing. Finally, a simulation analysis verifies the rationality of the model. Using fuzzy numbers and random analysis, one can consider more sources of uncertainty in the default process and investors' subjective judgment of the financial markets in a variety of fuzzy-reliability senses, so as to broaden the scope of possible credit spreads.
Gillam, Ronald B.; Loeb, Diane Frome; Hoffman, LaVae M.; Bohman, Thomas; Champlin, Craig A.; Thibodeau, Linda; Widen, Judith; Brandel, Jayne; Friel-Patti, Sandy
2008-01-01
Purpose A randomized controlled trial (RCT) was conducted to compare the language and auditory processing outcomes of children assigned to Fast ForWord-Language (FFW-L) to the outcomes of children assigned to nonspecific or specific language intervention comparison treatments that did not contain modified speech. Method Two hundred and sixteen children between the ages of 6 and 9 years with language impairments were randomly assigned to one of four arms: Fast ForWord-Language (FFW-L), academic enrichment (AE), computer-assisted language intervention (CALI), or individualized language intervention (ILI) provided by a speech-language pathologist. All children received 1 hour and 40 minutes of treatment, 5 days per week, for 6 weeks. Language and auditory processing measures were administered to the children by blinded examiners before treatment, immediately after treatment, 3 months after treatment, and 6 months after treatment. Results The children in all four arms improved significantly on a global language test and a test of backward masking. Children with poor backward masking scores who were randomized to the FFW-L arm did not present greater improvement on the language measures than children with poor backward masking scores who were randomized to the other three arms. Effect sizes, analyses of standard error of measurement, and normalization percentages supported the clinical significance of the improvements on the CASL. There was a treatment effect for the Blending Words subtest on the Comprehensive Test of Phonological Processing (Wagner, Torgesen, & Rashotte, 1999). Participants in the FFW-L and CALI arms earned higher phonological awareness scores than children in the ILI and AE arms at the six-month follow-up testing. Conclusion Fast ForWord-Language, the language intervention that provided modified speech to address a hypothesized underlying auditory processing deficit, was not more effective at improving general language skills or temporal processing skills than a nonspecific comparison treatment (AE) or specific language intervention comparison treatments (CALI and ILI) that did not contain modified speech stimuli. These findings call into question the temporal processing hypothesis of language impairment and the hypothesized benefits of using acoustically modified speech to improve language skills. The finding that children in the three treatment arms and the active comparison arm made clinically relevant gains on measures of language and temporal auditory processing informs our understanding of the variety of intervention activities that can facilitate development. PMID:18230858
A mathematical study of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
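A sketch of this construction, with illustrative choices for the three component processes (the report develops the properties analytically; the block structure below is an assumption used only to make the amplitude and mean slowly varying):

import numpy as np

rng = np.random.default_rng(4)
n, block = 10_000, 100

gauss = rng.standard_normal(n)                             # local Gaussian process
amp = np.repeat(rng.rayleigh(1.0, n // block), block)      # random amplitude process
mean = np.repeat(rng.normal(0.0, 0.5, n // block), block)  # mean-value process

x = amp * gauss + mean   # the composite (non-Gaussian) turbulence model
print(x.mean(), x.std())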
Significant locations in auxiliary data as seeds for typical use cases of point clustering
NASA Astrophysics Data System (ADS)
Kröger, Johannes
2018-05-01
Random greedy clustering and grid-based clustering are highly sensitive to their initial parameters. When used for point data clustering in maps they often change the apparent distribution of the underlying data. We propose a process that uses precomputed weighted seed points for the initialization of clusters, derived for example from local maxima in population density data. Exemplary results from the clustering of a dataset of petrol stations are presented.
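A minimal sketch of this initialization, assuming hypothetical seed coordinates (e.g., local maxima of population density) and using k-means as a stand-in clustering algorithm:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
petrol_stations = rng.uniform(0, 100, size=(500, 2))  # hypothetical points

# Precomputed weighted seed points from auxiliary data.
seeds = np.array([[20.0, 30.0], [70.0, 75.0], [85.0, 15.0]])

km = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit(petrol_stations)
print(np.bincount(km.labels_))  # cluster sizes anchored at the seeds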
Random bits, true and unbiased, from atmospheric turbulence
Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo
2014-01-01
Random numbers represent a fundamental ingredient for secure communications and numerical simulation, as well as for games and, more generally, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. The optical propagation in strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499
Kendall, W.L.; Nichols, J.D.; Hines, J.E.
1997-01-01
Statistical inference for capture-recapture studies of open animal populations typically relies on the assumption that all emigration from the studied population is permanent. However, there are many instances in which this assumption is unlikely to be met. We define two general models for the process of temporary emigration, completely random and Markovian. We then consider effects of these two types of temporary emigration on Jolly-Seber (Seber 1982) estimators and on estimators arising from the full-likelihood approach of Kendall et al. (1995) to robust design data. Capture-recapture data arising from Pollock's (1982) robust design provide the basis for obtaining unbiased estimates of demographic parameters in the presence of temporary emigration and for estimating the probability of temporary emigration. We present a likelihood-based approach to dealing with temporary emigration that permits estimation under different models of temporary emigration and yields tests for completely random and Markovian emigration. In addition, we use the relationship between capture probability estimates based on closed and open models under completely random temporary emigration to derive three ad hoc estimators for the probability of temporary emigration, two of which should be especially useful in situations where capture probabilities are heterogeneous among individual animals. Ad hoc and full-likelihood estimators are illustrated for small mammal capture-recapture data sets. We believe that these models and estimators will be useful for testing hypotheses about the process of temporary emigration, for estimating demographic parameters in the presence of temporary emigration, and for estimating probabilities of temporary emigration. These latter estimates are frequently of ecological interest as indicators of animal movement and, in some sampling situations, as direct estimates of breeding probabilities and proportions.
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
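The paper derives its own solver; as a hedged illustration of the formulation only, a simple singular-value-thresholding iteration recovering a synthetic low-rank "EEG" matrix from a random sampling mask:

import numpy as np

rng = np.random.default_rng(6)
channels, samples, rank = 16, 200, 3

# Synthetic low-rank multichannel signal and a random sampling mask.
X = rng.standard_normal((channels, rank)) @ rng.standard_normal((rank, samples))
mask = rng.random(X.shape) < 0.5            # sense only ~50% of the entries

Z = np.where(mask, X, 0.0)
for _ in range(200):                        # singular-value thresholding
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    Z = U @ np.diag(np.maximum(s - 1.0, 0.0)) @ Vt  # shrink singular values
    Z[mask] = X[mask]                       # re-impose observed entries

print(np.linalg.norm(Z - X) / np.linalg.norm(X))  # relative recovery error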
Generalized master equations for non-Poisson dynamics on networks.
Hoffmann, Till; Porter, Mason A; Lambiotte, Renaud
2012-10-01
The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature.
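The stationary computation reduces to an eigenvector problem. A minimal sketch with an illustrative 3-node effective transition matrix (any row-stochastic matrix would do):

import numpy as np

T = np.array([[0.0, 0.7, 0.3],   # effective transition matrix
              [0.4, 0.0, 0.6],   # (row = from-node, column = to-node)
              [0.5, 0.5, 0.0]])

vals, vecs = np.linalg.eig(T.T)  # left eigenvectors of T
lead = np.argmax(vals.real)      # leading eigenvalue (equal to 1)
pi = np.abs(vecs[:, lead].real)
pi /= pi.sum()                   # normalized stationary distribution
print(pi)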
Generalized master equations for non-Poisson dynamics on networks
NASA Astrophysics Data System (ADS)
Hoffmann, Till; Porter, Mason A.; Lambiotte, Renaud
2012-10-01
The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature.
NASA Astrophysics Data System (ADS)
Hatarik, Robert; Caggiano, J. A.; Callahan, D.; Casey, D.; Clark, D.; Doeppner, T.; Eckart, M.; Field, J.; Frenje, J.; Gatu Johnson, M.; Grim, G.; Hartouni, E.; Hurricane, O.; Kilkenny, J.; Knauer, J.; Ma, T.; Mannion, O.; Munro, D.; Sayre, D.; Spears, B.
2015-11-01
The method of moments was introduced by Pearson as a process for estimating the population distributions from which a set of "random variables" are measured. These moments are compared with a parameterization of the distributions, or with the same quantities generated by simulations of the process. Most diagnostic processes extract scalar parameters that depend on the moments of spectra derived from analytic solutions to the fusion rate, necessarily based on simplifying assumptions about the confined plasma. The precision of the TOF spectra and the nature of the implosions at the NIF require the inclusion of factors beyond the traditional analysis and the addition of higher-order moments to describe the data. This talk will present a diagnostic process for extracting the moments of the neutron energy spectrum for comparison with theoretical considerations as well as simulations of the implosions. Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. The randomness of the initialization thus leads to different ICA decomposition results, so a single decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods of repeated decomposition with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA costs much computing time. To mitigate this problem, in this paper, we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to indicate the effectiveness of the new method and made a performance comparison of the traditional one-time decomposition with ICA (ODICA), RDICA, and ATGP-ICA. The proposed method demonstrated that it not only could eliminate the randomness of ICA decomposition, but also could save much computing time compared to RDICA. Furthermore, the ROC (Receiver Operating Characteristic) power analysis also demonstrated the better signal reconstruction performance of ATGP-ICA over RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
Stochastic tools hidden behind the empirical dielectric relaxation laws
NASA Astrophysics Data System (ADS)
Stanislavsky, Aleksander; Weron, Karina
2017-03-01
The paper is devoted to recent advances in stochastic modeling of anomalous kinetic processes observed in dielectric materials which are prominent examples of disordered (complex) systems. Theoretical studies of dynamical properties of ‘structures with variations’ (Goldenfeld and Kadanoff 1999 Science 284 87-9) require application of such mathematical tools—by means of which their random nature can be analyzed and, independently of the details distinguishing various systems (dipolar materials, glasses, semiconductors, liquid crystals, polymers, etc), the empirical universal kinetic patterns can be derived. We begin with a brief survey of the historical background of the dielectric relaxation study. After a short outline of the theoretical ideas providing the random tools applicable to modeling of relaxation phenomena, we present probabilistic implications for the study of the relaxation-rate distribution models. In the framework of the probability distribution of relaxation rates we consider description of complex systems, in which relaxing entities form random clusters interacting with each other and single entities. Then we focus on stochastic mechanisms of the relaxation phenomenon. We discuss the diffusion approach and its usefulness for understanding of anomalous dynamics of relaxing systems. We also discuss extensions of the diffusive approach to systems under tempered random processes. Useful relationships among different stochastic approaches to the anomalous dynamics of complex systems allow us to get a fresh look at this subject. The paper closes with a final discussion on achievements of stochastic tools describing the anomalous time evolution of complex systems.
Hebestreit, Julia M; May, Arne
2017-12-19
Beta-blockers are a first-choice migraine preventive medication. So far it is unknown how they exert their therapeutic effect in migraine. To this end we examined the neural effect of metoprolol on trigeminal pain processing in 19 migraine patients and 26 healthy controls. All participants underwent functional magnetic resonance imaging (fMRI) during trigeminal pain twice: healthy subjects took part in a placebo-controlled, randomized and double-blind study, receiving a single dose of metoprolol and placebo. Patients were examined with a baseline scan before starting the preventive medication and 3 months later whilst treated with metoprolol. Mean pain intensity ratings were not significantly altered under metoprolol. Functional imaging revealed no significant differences in nociceptive processing in either group. Contrary to earlier findings from animal studies, we did not find an effect of metoprolol on the thalamus in either group. However, using a more liberal and exploratory threshold, hypothalamic activity was slightly increased under metoprolol in both healthy controls and migraineurs. No significant effect of metoprolol on trigeminal pain processing was observed, suggesting a peripheral effect of metoprolol. Exploratory analyses revealed slightly enhanced hypothalamic activity under metoprolol in both groups. Given the emerging role of the hypothalamus in migraine attack generation, these data need further examination.
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
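A sketch of the AR-to-MA workflow using statsmodels (the report's FORTRAN algorithm is not reproduced; the AR(2) series below is synthetic):

import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima_process import arma2ma

rng = np.random.default_rng(7)
x = np.zeros(1000)
for i in range(2, 1000):                    # synthetic AR(2) "light curve"
    x[i] = 0.6 * x[i - 1] - 0.3 * x[i - 2] + rng.standard_normal()

ar_fit = AutoReg(x, lags=2).fit()
ar_poly = np.r_[1.0, -ar_fit.params[1:]]    # AR polynomial (skip intercept)
ma_weights = arma2ma(ar_poly, np.array([1.0]), lags=10)
print(ma_weights)                           # random-pulse interpretation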
Fernández, Alejandro; Mascayano, Franco; Lips, Walter; Painel, Andrés; Norambuena, Jonathan; Madrid, Eva
2015-06-30
Modafinil is a drug developed and used for the treatment of excessive lethargy. Even though very effective for sleep disorders, it is still controversial whether modafinil can improve performance in high-order cognitive processes such as memory and executive function. This randomized, double-blind, placebo-controlled, crossover trial was designed to evaluate the effect of modafinil (compared to placebo) on the cognitive functions of healthy students. 160 volunteers were recruited and allocated randomly to modafinil or placebo group, and were assessed using the Stroop Test, BCET test and Digit span test. We found a significant difference in favor of modafinil compared to placebo in the proportion of correct answers of Stroop Test in congruent situation. A significant shorter latency of modafinil group in the incongruent situation of Stroop test was also found. No differences were found in Digit Span, or BCET tests. The study demonstrated that modafinil does not enhance the global cognitive performance of healthy non-sleep deprived students, except regarding non-demanding tasks. In particular, this drug does not seem to have positive effects on mental processes that sustain studying tasks in the college population under normal conditions. We expect these findings to demystify the use of this drug and help decision making concerning pharmacological public policies.
VNIR hyperspectral background characterization methods in adverse weather conditions
NASA Astrophysics Data System (ADS)
Romano, João M.; Rosario, Dalton; Roth, Luz
2009-05-01
Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may be located. Weather variability, however, may affect the ability of an algorithm to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may facilitate the ability of an algorithm to discriminate potential targets over a variety of weather conditions. In a previous paper, we introduced a new autonomous, target-size-invariant background characterization process, the Autonomous Background Characterization (ABC), also known as the Parallel Random Sampling (PRS) method. It features a random sampling stage; a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during random sampling; and a fusion of results at the end. In this paper, we demonstrate how different background characterization approaches are able to improve the performance of algorithms over a variety of challenging weather conditions. Using the Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization methods such as the global information, 2-stage global information, and our proposed method, ABC, using data collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse Weather data collection, comprising heavy, light, and transitional fog, light and heavy rain, and low light conditions.
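For reference, the baseline Mahalanobis-distance detector scores each pixel against background statistics; a sketch with synthetic data (how the background samples are chosen is exactly where the characterization methods above differ):

import numpy as np

rng = np.random.default_rng(8)
pixels, bands = 2000, 50
cube = rng.standard_normal((pixels, bands))  # stand-in VNIR pixel spectra

mu = cube.mean(axis=0)                       # background mean spectrum
cov_inv = np.linalg.inv(np.cov(cube, rowvar=False))

diff = cube - mu
scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances
print(np.argsort(scores)[-10:])              # most anomalous pixels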
Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe
2017-03-01
Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses rapidly developed to reach significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.
Yiu, Sean; Farewell, Vernon T; Tom, Brian D M
2017-08-01
Many psoriatic arthritis patients do not progress to permanent joint damage in any of the 28 hand joints, even under prolonged follow-up. This has led several researchers to fit models that estimate the proportion of stayers (those who do not have the propensity to experience the event of interest) and to characterize the rate of developing damaged joints in the movers (those who have the propensity to experience the event of interest). However, when fitted to the same data, the paper demonstrates that the choice of model for the movers can lead to widely varying conclusions on a stayer population, thus implying that, if interest lies in a stayer population, a single analysis should not generally be adopted. The aim of the paper is to provide greater understanding regarding estimation of a stayer population by comparing the inferences, performance and features of multiple fitted models to real and simulated data sets. The models for the movers are based on Poisson processes with patient level random effects and/or dynamic covariates, which are used to induce within-patient correlation, and observation level random effects are used to account for time varying unobserved heterogeneity. The gamma, inverse Gaussian and compound Poisson distributions are considered for the random effects.
An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response
Stipčević, Mario; Ursin, Rupert
2015-01-01
Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which can be described only by a probabilistic theory, even in principle. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits an ultra low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actual implemented technology and enables one to quickly estimate the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576
Langbein, John O.
2012-01-01
Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than when simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either (1) as a sum of flicker and random-walk noise or (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternate noise models is difficult since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. But, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.
Cai, Tianxi; Karlson, Elizabeth W.
2013-01-01
Objectives To test whether data extracted from full-text patient visit notes from an electronic medical record (EMR) would improve the classification of PsA compared to an algorithm based on codified data. Methods From the > 1,350,000 adults in a large academic EMR, all 2318 patients with a billing code for PsA were extracted and 550 were randomly selected for chart review and algorithm training. Using codified data and phrases extracted from narrative data using natural language processing, 31 predictors were extracted and three random forest algorithms trained using coded, narrative, and combined predictors. The receiver operating characteristic (ROC) curve was used to identify the optimal algorithm, and a cut point was chosen to achieve the maximum sensitivity possible at a 90% positive predictive value (PPV). The algorithm was then used to classify the remaining 1768 charts and finally validated in a random sample of 300 cases predicted to have PsA. Results The PPV of a single PsA code was 57% (95%CI 55%–58%). Using a combination of coded data and NLP, the random forest algorithm reached a PPV of 90% (95%CI 86%–93%) at a sensitivity of 87% (95%CI 83%–91%) in the training data. The PPV was 93% (95%CI 89%–96%) in the validation set. Adding NLP predictors to codified data increased the area under the ROC curve (p < 0.001). Conclusions Using NLP with text notes from electronic medical records improved the performance of the prediction algorithm significantly. Random forests were a useful tool to accurately classify psoriatic arthritis cases to enable epidemiological research. PMID:20701955
Random Walks in a One-Dimensional Lévy Random Environment
NASA Astrophysics Data System (ADS)
Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena
2016-04-01
We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.
Determining Scale-dependent Patterns in Spatial and Temporal Datasets
NASA Astrophysics Data System (ADS)
Roy, A.; Perfect, E.; Mukerji, T.; Sylvester, L.
2016-12-01
Spatial and temporal datasets of interest to Earth scientists often contain plots of one variable against another, e.g., rainfall magnitude vs. time or fracture aperture vs. spacing. Such data, composed of distributions of events along a transect / timeline along with their magnitudes, can display persistent or antipersistent trends, as well as random behavior, that may contain signatures of underlying physical processes. Lacunarity is a technique that was originally developed for multiscale analysis of data. In a recent study we showed that lacunarity can be used for revealing changes in scale-dependent patterns in fracture spacing data. Here we present a further improvement of our technique, with lacunarity applied to various non-binary datasets comprised of event spacings and magnitudes. We test our technique on a set of four synthetic datasets, three of which are based on an autoregressive model and have magnitudes at every point along the "timeline", thus representing antipersistent, persistent, and random trends. The fourth dataset is made up of five clusters of events, each containing a set of random magnitudes. The concept of the lacunarity ratio, LR, is introduced; this is the lacunarity of a given dataset normalized to the lacunarity of its random counterpart. It is demonstrated that LR can successfully delineate scale-dependent changes in terms of antipersistence and persistence in the synthetic datasets. This technique is then applied to three different types of data: a hundred-year rainfall record from Knoxville, TN, USA, a set of varved sediments from the Marca Shale, and a set of fracture aperture and spacing data from NE Mexico. While the rainfall data and varved sediments both appear to be persistent at small scales, at larger scales they both become random. On the other hand, the fracture data show antipersistence at small scales (within clusters) and random behavior at large scales. Such differences in behavior with respect to scale-dependent changes, from antipersistence to randomness, from persistence to randomness, or otherwise, may be related to differences in the physicochemical properties and processes contributing to multiscale datasets.
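A gliding-box lacunarity sketch for a one-dimensional magnitude sequence, with the lacunarity ratio LR formed against a random permutation of the same data (the estimator below is the standard gliding-box form, assumed rather than taken from the authors' implementation):

import numpy as np

def lacunarity(x, r):
    masses = np.convolve(x, np.ones(r), mode="valid")  # gliding-box sums
    return (masses ** 2).mean() / masses.mean() ** 2

rng = np.random.default_rng(9)
magnitudes = rng.pareto(2.0, 1024)     # hypothetical event magnitudes
for r in (2, 8, 32, 128):
    lr = lacunarity(magnitudes, r) / lacunarity(rng.permutation(magnitudes), r)
    print(r, round(lr, 3))             # LR away from 1 indicates structure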
NASA Astrophysics Data System (ADS)
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of price changes in real estate markets. We suppose that changes in a book of prices occur at the jump points of a Poisson process with random intensity, i.e., the moments of change follow a random process of Cox type. We calculate the cumulative mathematical expectation and variance of the random intensity of this point process. When the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.
Feynman-Kac formula for stochastic hybrid systems.
Bressloff, Paul C
2017-01-01
We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
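For orientation, the generic structure of such a result can be written down for a piecewise deterministic Markov process (PDMP). The LaTeX sketch below assumes drift functions F_n, switching rates W_nm, and an integrated functional A(t); the paper's precise notation and boundary terms may differ.

```latex
% Schematic only; notation assumed, not the paper's. PDMP with discrete
% state n (switching rates W_{nm}), continuous state \dot{x} = F_n(x),
% and functional A(t) = \int_0^t f(x(s), n(s))\, ds.
\[
  Z_n(x,s,t) \;\equiv\; \mathbb{E}\!\left[ e^{sA(t)}\,
      \delta\bigl(x(t)-x\bigr)\,\delta_{n(t),n} \right],
\]
\[
  \partial_t Z_n \;=\; -\,\partial_x\!\bigl(F_n(x)\,Z_n\bigr)
      \;+\; s\, f(x,n)\, Z_n \;+\; \sum_m W_{nm}\, Z_m .
\]
```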
Processing and characterization of α-elastin electrospun membranes
NASA Astrophysics Data System (ADS)
Araujo, J.; Padrão, J.; Silva, J. P.; Dourado, F.; Correia, D. M.; Botelho, G.; Gomez Ribelles, J. L.; Lanceros-Méndez, S.; Sencadas, V.
2014-06-01
Elastin isolated from fresh bovine ligaments was dissolved in a mixture of 1,1,1,3,3,3-hexafluoro-2-propanol and water and electrospun into fiber membranes under different processing conditions. Mats of randomly oriented and of aligned fibers were obtained with fixed and rotating grounded collectors, respectively. The fibrils were composed of thin ribbons whose width depends on the electrospinning conditions; widths from 721 nm up to 2.12 μm were achieved. After cross-linking with glutaraldehyde, α-elastin can take up as much as 1700% of PBS solution, and a slight increase in fiber thickness was observed. The glass transition temperature of the electrospun fiber mats was found to occur at ˜80 °C. Moreover, α-elastin proved to be an ideal elastomeric material, with no mechanical hysteresis found in cyclic mechanical measurements. The elastic moduli obtained for random and aligned fiber mats in PBS solution were 330±10 kPa and 732±165 kPa, respectively. Finally, the electrospinning and cross-linking processes do not inhibit MC-3T3-E1 cell adhesion. Cell culture results showed good cell adhesion and proliferation on the cross-linked elastin fiber mats.
NASA Astrophysics Data System (ADS)
Koltai, Péter; Renger, D. R. Michiel
2018-06-01
One way to analyze complicated non-autonomous flows is by trying to understand their transport behavior. In a quantitative, set-oriented approach to transport and mixing, finite-time coherent sets play an important role. These are time-parametrized families of sets with unlikely transport to and from their surroundings under small or vanishing random perturbations of the dynamics. Here we propose, as a measure of transport and mixing for purely advective (i.e., deterministic) flows, (semi)distances that arise under vanishing perturbations in the sense of large deviations. Analogously, for given finite Lagrangian trajectory data we derive a discrete-time-and-space semidistance that comes from the "best" approximation of the randomly perturbed process conditioned on this limited information about the deterministic flow. It can be computed as a shortest path in a graph with time-dependent weights. Furthermore, we argue that coherent sets are regions of maximal farness in terms of transport and mixing, and hence they occur as extremal regions on a spanning structure of the state space under this semidistance, in fact, under any distance measure arising from the physical notion of transport. Based on this notion, we develop a tool to analyze the state space (or the finite trajectory data at hand) and identify coherent regions. We validate our approach on idealized prototypical examples and well-studied standard cases.
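A shortest path under time-dependent weights is computable with a Dijkstra-style search over (node, time) states. The sketch below is a generic illustration, not the authors' implementation; the graph encoding and the one-step-per-edge time model are assumptions.

```python
import heapq

def shortest_path_timed(neighbors, weight, source, target, t0=0, t_max=10_000):
    """neighbors[u] -> iterable of v; weight(u, v, t) -> edge cost at time t.
    Each edge traversal advances the discrete time index by one step."""
    dist = {(source, t0): 0.0}
    heap = [(0.0, source, t0)]
    while heap:
        d, u, t = heapq.heappop(heap)
        if u == target:
            return d
        if t >= t_max:
            continue  # guard against unbounded growth of the time index
        for v in neighbors.get(u, ()):
            nd = d + weight(u, v, t)
            if nd < dist.get((v, t + 1), float("inf")):
                dist[(v, t + 1)] = nd
                heapq.heappush(heap, (nd, v, t + 1))
    return float("inf")

nbrs = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
w = lambda u, v, t: 1.0 + 0.5 * t  # weights grow with time
print(shortest_path_timed(nbrs, w, "a", "d"))  # -> 2.5
```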
Mutual synchronization of weakly coupled gyrotrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozental, R. M.; Glyavin, M. Yu.; Sergeev, A. S.
2015-09-15
The processes of synchronization of two weakly coupled gyrotrons are studied within the framework of non-stationary equations with a non-fixed longitudinal field structure. Allowing for a small difference between the free oscillation frequencies of the gyrotrons, we found a certain range of parameters where mutual synchronization is possible while high electronic efficiency is retained. It is also shown that synchronization regimes can be realized even under random fluctuations of the parameters of the electron beams.
The Effect of Practice Schedule on Context-Dependent Learning.
Lee, Ya-Yun; Fisher, Beth E
2018-03-02
It is well established that random practice, compared to blocked practice, enhances motor learning. Additionally, while information in the environment may be incidental, learning is also enhanced when an individual performs a task within the same environmental context in which the task was originally practiced. This study aimed to disentangle the effects of practice schedule and incidental/environmental context on motor learning. Participants practiced three finger sequences under either a random or a blocked practice schedule. Each sequence was associated with a specific incidental context (i.e., color and location on the computer screen) during practice. The participants were tested under conditions in which the sequence-context associations either remained the same as, or were changed from, those of practice. When the sequence-context association was changed, the participants who had practiced under a blocked schedule demonstrated a greater performance decrement than those who had practiced under a random schedule. The findings suggest that participants who practice under a random schedule are more resistant to changes in environmental context.
Cell cultivation under different gravitational loads using a novel random positioning incubator
Benavides Damm, Tatiana; Walther, Isabelle; Wüest, Simon L; Sekler, Jörg; Egli, Marcel
2014-01-01
Important in biotechnology is the establishment of cell culture methods that reflect the in vivo situation accurately. One approach for reaching this goal is through 3D cell cultivation that mimics tissue or organ structures and functions. We present here a newly designed and constructed random positioning incubator (RPI) that enables 3D cell culture in simulated microgravity (0 g). In addition to growing cells in a weightlessness-like environment, our RPI enables long-duration cell cultivation under various gravitational loads, ranging from close to 0 g to almost 1 g. This allows the study of the mechanotransductional process of cells involved in the conversion of physical forces to an appropriate biochemical response. Gravity is a type of physical force with profound developmental implications in cellular systems as it modulates the resulting signaling cascades as a consequence of mechanical loading. The experiments presented here were conducted on mouse skeletal myoblasts and human lymphocytes, two types of cells that have been shown in the past to be particularly sensitive to changes in gravity. Our novel RPI will expand the horizon at which mechanobiological experiments are conducted. The scientific data gathered may not only improve the sustainment of human life in space, but also lead to the design of alternative countermeasures against diseases related to impaired mechanosensation and downstream signaling processes on earth. PMID:24375199
Cellular registration without behavioral recall of olfactory sensory input under general anesthesia.
Samuelsson, Andrew R; Brandon, Nicole R; Tang, Pei; Xu, Yan
2014-04-01
Previous studies suggest that sensory information is "received" but not "perceived" under general anesthesia. Whether and to what extent the brain continues to process sensory inputs in a drug-induced unconscious state remain unclear. One hundred seven rats were randomly assigned to 12 different anesthesia and odor exposure paradigms. The immunoreactivities of the immediate early gene products c-Fos and Egr1 as neural activity markers were combined with behavioral tests to assess the integrity and relationship of cellular and behavioral responsiveness to olfactory stimuli under a surgical plane of ketamine-xylazine general anesthesia. The olfactory sensory processing centers could distinguish the presence or absence of experimental odorants even when animals were fully anesthetized. In the anesthetized state, the c-Fos immunoreactivity in the higher olfactory cortices revealed a difference between novel and familiar odorants similar to that seen in the awake state, suggesting that the anesthetized brain functions beyond simply receiving external stimulation. Reexposing animals to odorants previously experienced only under anesthesia resulted in c-Fos immunoreactivity, which was similar to that elicited by familiar odorants, indicating that previous registration had occurred in the anesthetized brain. Despite the "cellular memory," however, odor discrimination and forced-choice odor-recognition tests showed absence of behavioral recall of the registered sensations, except for a longer latency in odor recognition tests. Histologically distinguishable registration of sensory processing continues to occur at the cellular level under ketamine-xylazine general anesthesia despite the absence of behavioral recognition, consistent with the notion that general anesthesia causes disintegration of information processing without completely blocking cellular communications.
Graf, Heiko; Metzger, Coraline D; Walter, Martin; Abler, Birgit
2016-01-06
Investigating the effects of serotonergic antidepressants on neural correlates of visual erotic stimulation revealed decreased reactivity within the dopaminergic reward network along with decreased subjective sexual functioning compared with placebo. However, a global dampening of the reward system under serotonergic drugs is not intuitive considering clinical observations of their beneficial effects in the treatment of depression. Particularly, learning signals as coded in prediction error processing within the dopaminergic reward system can be assumed to be rather enhanced as antidepressant drugs have been demonstrated to facilitate the efficacy of psychotherapeutic interventions relying on learning processes. Within the same study sample, we now explored the effects of serotonergic and dopaminergic/noradrenergic antidepressants on prediction error signals compared with placebo by functional MRI. A total of 17 healthy male participants (mean age: 25.4 years) were investigated under the administration of paroxetine, bupropion and placebo for 7 days each within a randomized, double-blind, within-subject cross-over design. During functional MRI, we used an established monetary incentive task to explore neural prediction error signals within the bilateral nucleus accumbens as region of interest within the dopaminergic reward system. In contrast to diminished neural activations and subjective sexual functioning under the serotonergic agent paroxetine under visual erotic stimulation, we revealed unaffected or even enhanced neural prediction error processing within the nucleus accumbens under this antidepressant along with unaffected behavioural processing. Our study provides evidence that serotonergic antidepressants facilitate prediction error signalling and may support suggestions of beneficial effects of these agents on reinforced learning as an essential element in behavioural psychotherapy.
Cellular Registration Without Behavioral Recall Of Olfactory Sensory Input Under General Anesthesia
Samuelsson, Andrew R.; Brandon, Nicole R.; Tang, Pei; Xu, Yan
2014-01-01
Background Previous studies suggest that sensory information is “received” but not “perceived” under general anesthesia. Whether and to what extent the brain continues to process sensory inputs in a drug-induced unconscious state remain unclear. Methods 107 rats were randomly assigned to 12 different anesthesia and odor exposure paradigms. The immunoreactivities of the immediate early gene products c-Fos and Egr1 as neural activity markers were combined with behavioral tests to assess the integrity and relationship of cellular and behavioral responsiveness to olfactory stimuli under a surgical plane of ketamine-xylazine general anesthesia. Results The olfactory sensory processing centers could distinguish the presence or absence of experimental odorants even when animals were fully anesthetized. In the anesthetized state, the c-Fos immunoreactivity in the higher olfactory cortices revealed a difference between novel and familiar odorants similar to that seen in the awake state, suggesting that the anesthetized brain functions beyond simply receiving external stimulation. Re-exposing animals to odorants previously experienced only under anesthesia resulted in c-Fos immunoreactivity similar to that elicited by familiar odorants, indicating that previous registration had occurred in the anesthetized brain. Despite the “cellular memory,” however, odor discrimination and forced-choice odor-recognition tests showed absence of behavioral recall of the registered sensations, except for a longer latency in odor recognition tests. Conclusions Histologically distinguishable registration of sensory processing continues to occur at the cellular level under ketamine-xylazine general anesthesia despite the absence of behavioral recognition, consistent with the notion that general anesthesia causes disintegration of information processing without completely blocking cellular communications. PMID:24694846
A qualitative assessment of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of two Gaussian processes and the sum of that product with a third Gaussian process. The resulting total random process is interpreted as the sum of an amplitude modulated process and a slowly varying, random mean value. The properties of the process are examined, including an interpretation of the process in terms of the physical structure of atmospheric motions. The inclusion of the mean value variation gives an improved representation of the properties of atmospheric motions, since the resulting process can account for the differences in the statistical properties of atmospheric velocity components and their gradients. The application of the process to atmospheric turbulence problems, including the response of aircraft dynamic systems, is examined. The effects of the mean value variation upon aircraft loads are small in most cases, but can be important in the measurement and interpretation of atmospheric turbulence data.
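The construction is straightforward to simulate. The sketch below builds the product-plus-random-mean process from three independent smoothed Gaussian noises; the correlation scales are illustrative assumptions, not values from the report.

```python
import numpy as np

def smooth_gaussian(n, scale, rng):
    """Gaussian noise low-pass filtered by a moving average of width scale."""
    x = rng.normal(size=n + scale)
    return np.convolve(x, np.ones(scale) / np.sqrt(scale), mode="valid")[:n]

rng = np.random.default_rng(0)
n = 10_000
a = smooth_gaussian(n, 200, rng)  # slowly varying amplitude modulation
g = smooth_gaussian(n, 5, rng)    # faster Gaussian "turbulence" component
m = smooth_gaussian(n, 500, rng)  # slowly varying random mean value
x = a * g + m                     # the total process described above

# the modulated product a*g is heavier-tailed than Gaussian:
z = (x - x.mean()) / x.std()
print("excess kurtosis:", round(float((z ** 4).mean() - 3), 2))
```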
NASA Astrophysics Data System (ADS)
Yoon, Heonjun; Kim, Miso; Park, Choon-Su; Youn, Byeng D.
2018-01-01
Piezoelectric vibration energy harvesting (PVEH) has received much attention as a potential solution that could ultimately realize self-powered wireless sensor networks. Since most ambient vibrations in nature are inherently random and nonstationary, the output performances of PVEH devices also change randomly with time. However, little attention has been paid to investigating the randomly time-varying electroelastic behaviors of PVEH systems, either analytically or experimentally. The objective of this study is thus to take a step toward a deep understanding of the time-varying performances of PVEH devices under nonstationary random vibrations. Two typical cases of nonstationary random vibration signals are considered: (1) randomly varying amplitude (amplitude modulation; AM) and (2) randomly varying amplitude with randomly varying instantaneous frequency (amplitude and frequency modulation; AM-FM). In both cases, this study pursues well-balanced correlations of analytical predictions with experimental observations to deduce the relationships between the time-varying output performances of the PVEH device and two primary input parameters: the central frequency and the external electrical resistance. We introduce three correlation metrics to quantitatively compare analytical predictions and experimental observations: the normalized root mean square error, the correlation coefficient, and the weighted integrated factor. Analytical predictions are in excellent agreement with experimental observations, both mechanically and electrically. This study provides insightful guidelines for designing PVEH devices to reliably generate electric power under nonstationary random vibrations.
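The two excitation cases are simple to synthesize. The following sketch generates an AM and an AM-FM random signal by modulating a carrier with slowly varying random envelopes and instantaneous frequency; all rates and scales are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 1000.0, 10.0  # sample rate [Hz] and duration [s] (assumed)
t = np.arange(0.0, T, 1.0 / fs)

def slow_random(n, scale):
    """Slowly varying zero-mean random signal (moving-average filtered noise)."""
    x = rng.normal(size=n + scale)
    return np.convolve(x, np.ones(scale) / scale, mode="valid")[:n]

f0 = 50.0                                   # assumed central frequency [Hz]
envelope = 1.0 + 0.5 * slow_random(t.size, 500)

am = envelope * np.sin(2 * np.pi * f0 * t)  # case 1: AM excitation

f_inst = f0 * (1.0 + 0.2 * slow_random(t.size, 500))  # random inst. frequency
phase = 2.0 * np.pi * np.cumsum(f_inst) / fs          # integrate to phase
am_fm = envelope * np.sin(phase)                      # case 2: AM-FM excitation
```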
Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun
2017-08-01
Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross-validation, CV). The results show that our model achieves a CV result (R² = 0.81) reflecting higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
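As an illustration of the general idea (not the paper's Bayesian hierarchical model), a kriging-style Gaussian-process regression of PM2.5 on AOD plus coordinates can be sketched with scikit-learn; the data and hyperparameters below are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([rng.uniform(0, 10, n),   # longitude (arbitrary units)
                     rng.uniform(0, 10, n),   # latitude
                     rng.uniform(0, 2, n)])   # AOD
pm25 = 20 + 15 * X[:, 2] + rng.normal(0, 3, n)  # synthetic "observations"

# anisotropic RBF over (lon, lat, AOD) plus a white-noise nugget
kernel = 1.0 * RBF(length_scale=[2.0, 2.0, 0.5]) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, pm25)

mean, sd = gp.predict(np.array([[5.0, 5.0, 1.2]]), return_std=True)
print(f"predicted PM2.5: {mean[0]:.1f} +/- {sd[0]:.1f}")
```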
Gothe, Neha P; Kramer, Arthur F; McAuley, Edward
2017-01-01
Age-related cognitive decline is well documented across various aspects of cognitive function, including attention and processing speed, and lifestyle behaviors such as physical activity play an important role in preventing cognitive decline and maintaining or even improving cognitive function. The purpose of this study was to evaluate the effects of an 8-week Hatha yoga intervention on attention and processing speed among older adults. Participants (n = 118; mean age 62 ± 5.59 years) were randomly assigned to an 8-week Hatha yoga group or a stretching control group and completed cognitive assessments (Attention Network Task, Trail Making Test parts A and B, and Pattern Comparison Test) at baseline and after the 8-week intervention. Analyses of covariance revealed significantly faster reaction times for the yoga group on the Attention Network Task's neutral, congruent, and incongruent conditions (p ≤ 0.04). The yoga intervention also improved participants' visuospatial and perceptual processing on the Trail Making Test part B (p = 0.002) and Pattern Comparison (p < 0.001) tests. These results suggest that yoga practice that includes postures, breathing, and meditative exercises leads to improved attentional and information processing abilities. Although the underlying mechanisms remain largely speculative, more systematic trials are needed to explore the extent of cognitive benefits and their neurobiological mechanisms.
OBSIFRAC: database-supported software for 3D modeling of rock mass fragmentation
NASA Astrophysics Data System (ADS)
Empereur-Mot, Luc; Villemin, Thierry
2003-03-01
Under stress, fractures in rock masses tend to form fully connected networks. The mass can thus be thought of as a 3D assembly of blocks produced by fragmentation processes. A numerical model has been developed that uses a relational database to describe such a mass. The model, which assumes the fractures to be planar, allows data from natural networks to be used to test theories concerning fragmentation processes. In the model, blocks are bordered by faces that are composed of edges and vertices. A fracture can originate from a seed point, its orientation being controlled by the stress field specified by an orientation matrix. Alternatively, it can be generated from a discrete set of given orientations and positions. Both kinds of fracture can occur together in a model. From an original simple block, a given fracture produces two simple polyhedral blocks, and the original block becomes compound. Compound and simple blocks created throughout fragmentation are stored in the database. Several fragmentation processes have been studied. In one scenario, a constant proportion of blocks is fragmented at each step of the process. The resulting distribution appears to be fractal, even though seed points are random within each fragmented block. In a second scenario, division affects only one random block at each stage of the process and gives a Weibull volume distribution law. The software can be used for a large number of other applications.
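Both scenarios reduce to a few lines of simulation if one tracks only block volumes. The sketch below uses a simplified splitting rule (a random volume fraction) in place of the geometric model; it is meant to illustrate the two fragmentation schedules, not to reproduce OBSIFRAC.

```python
import numpy as np

rng = np.random.default_rng(0)

def split(v):
    """Split a block of volume v in two at a random internal position."""
    f = rng.uniform(0.2, 0.8)
    return [f * v, (1.0 - f) * v]

def scenario_proportion(steps=12, p=0.5):
    """Each step, a constant proportion p of blocks is fragmented."""
    blocks = [1.0]
    for _ in range(steps):
        keep, frag = [], []
        for v in blocks:
            (frag if rng.random() < p else keep).append(v)
        blocks = keep + [w for v in frag for w in split(v)]
    return np.array(blocks)  # tends toward a fractal-like volume spread

def scenario_single(steps=2000):
    """Each step, a single randomly chosen block is fragmented."""
    blocks = [1.0]
    for _ in range(steps):
        i = rng.integers(len(blocks))
        blocks[i:i + 1] = split(blocks[i])
    return np.array(blocks)  # closer to a Weibull-type volume spread

print(len(scenario_proportion()), len(scenario_single()))
```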
Kandler, Anne; Shennan, Stephen
2015-12-06
Cultural change can be quantified by temporal changes in the frequency of different cultural artefacts, and a central question is to identify what underlying cultural transmission processes could have caused the observed frequency changes. Observed changes, however, often describe the dynamics in samples of the population of artefacts, whereas transmission processes act on the whole population. Here we develop a modelling framework aimed at addressing this inference problem. To do so, we first generate population structures from which the observed sample could have been drawn randomly, and then determine theoretical samples at a later time t2 produced under the assumption that changes in frequencies are caused by a specific transmission process. Thereby we also account for the potential effect of time-averaging processes in the generation of the observed sample. Subsequent statistical comparisons (e.g. using Bayesian inference) of the theoretical and observed samples at t2 can establish which processes could have produced the observed frequency data. In this way, we infer underlying transmission processes directly from the available data without any equilibrium assumption. We apply this framework to a dataset describing pottery from settlements of some of the first farmers in Europe (the LBK culture) and conclude that the observed frequency dynamics of different types of decorated pottery are consistent with age-dependent selection, a preference for 'young' pottery types which is potentially indicative of fashion trends. © 2015 The Author(s).
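The generate-simulate-compare loop can be illustrated with the simplest candidate process, unbiased copying. In the sketch below the population reconstruction is a crude bootstrap, and the population size, generation count, and sample sizes are hypothetical; the comparison step is left as a frequency count.

```python
import numpy as np

rng = np.random.default_rng(0)

def neutral_step(pop):
    """One generation of unbiased copying (sampling with replacement)."""
    return rng.choice(pop, size=pop.size, replace=True)

def theoretical_sample_t2(sample_t1, pop_size, generations, sample_size):
    # a population structure from which sample_t1 could have been drawn:
    pop = rng.choice(sample_t1, size=pop_size, replace=True)
    for _ in range(generations):
        pop = neutral_step(pop)
    return rng.choice(pop, size=sample_size, replace=False)

obs_t1 = np.array([0] * 60 + [1] * 30 + [2] * 10)  # observed artefact types
sim_t2 = theoretical_sample_t2(obs_t1, pop_size=5000, generations=20,
                               sample_size=100)
print(np.bincount(sim_t2, minlength=3))  # compare with the observed t2 sample
```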
Unified underpinning of human mobility in the real world and cyberspace
NASA Astrophysics Data System (ADS)
Zhao, Yi-Ming; Zeng, An; Yan, Xiao-Yong; Wang, Wen-Xu; Lai, Ying-Cheng
2016-05-01
Human movements in the real world and in cyberspace affect not only dynamical processes such as epidemic spreading and information diffusion but also social and economical activities such as urban planning and personalized recommendation in online shopping. Despite recent efforts in characterizing and modeling human behaviors in both the real and cyber worlds, the fundamental dynamics underlying human mobility have not been well understood. We develop a minimal, memory-based random walk model in limited space for reproducing, with a single parameter, the key statistical behaviors characterizing human movements in both cases. The model is validated using relatively big data from mobile phone and online commerce, suggesting memory-based random walk dynamics as the unified underpinning for human mobility, regardless of whether it occurs in the real world or in cyberspace.
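A common way to encode such memory is a preferential-return rule. The sketch below is one plausible single-parameter walker of this kind, offered as an assumption-laden illustration rather than the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, steps, q = 100, 10_000, 0.7  # q is the single memory parameter
visits = np.zeros(n_sites)
pos = 0
visits[pos] = 1

for _ in range(steps):
    if rng.random() < q:
        # preferential return: revisit sites in proportion to past visits
        pos = rng.choice(n_sites, p=visits / visits.sum())
    else:
        pos = (pos + rng.choice([-1, 1])) % n_sites  # local random hop
    visits[pos] += 1

# the visit-frequency distribution becomes strongly heterogeneous
print(np.sort(visits)[::-1][:10])
```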
NASA Astrophysics Data System (ADS)
Sidorova, Mariia; Semenov, Alexej; Hübers, Heinz-Wilhelm; Charaev, Ilya; Kuzmin, Artem; Doerner, Steffen; Siegel, Michael
2017-11-01
We studied timing jitter in the appearance of photon counts in meandering nanowires with different fractional amounts of bends. Intrinsic timing jitter, which is the probability density function of the random time delay between photon absorption in a current-carrying superconducting nanowire and the appearance of the normal domain, reveals two different underlying physical mechanisms. In the deterministic regime, which is realized at large photon energies and large currents, jitter is controlled by the position-dependent detection threshold in the straight parts of the meanders; it decreases as the current increases. At small photon energies, jitter increases and its current dependence disappears. In this probabilistic regime, jitter is controlled by a Poisson process in which magnetic vortices jump randomly across the wire in areas adjacent to the bends.
Robust Tomography using Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Silva, Marcus; Kimmel, Shelby; Johnson, Blake; Ryan, Colm; Ohki, Thomas
2013-03-01
Conventional randomized benchmarking (RB) can be used to estimate the fidelity of Clifford operations in a manner that is robust against preparation and measurement errors -- thus allowing for a more accurate and relevant characterization of the average error in Clifford gates compared to standard tomography protocols. Interleaved RB (IRB) extends this result to the extraction of error rates for individual Clifford gates. In this talk we will show how to combine multiple IRB experiments to extract all information about the unital part of any trace preserving quantum process. Consequently, one can compute the average fidelity to any unitary, not just the Clifford group, with tighter bounds than IRB. Moreover, the additional information can be used to design improvements in control. MS, BJ, CR and TO acknowledge support from IARPA under contract W911NF-10-1-0324.
Correlated continuous time random walk and option pricing
NASA Astrophysics Data System (ADS)
Lv, Longjin; Xiao, Jianbin; Fan, Liangzhong; Ren, Fuyao
2016-04-01
In this paper, we study a correlated continuous-time random walk (CCTRW) with averaged waiting time, whose probability density function (PDF) is proved to follow a stretched Gaussian distribution. We then apply this process to the option pricing problem. Supposing the price of the underlying is driven by this CCTRW, we find that the model captures the subdiffusive characteristic of financial markets. Using the mean self-financing hedging strategy, we obtain closed-form pricing formulas for a European option with and without transaction costs, respectively. Finally, comparing the obtained model with the classical Black-Scholes model, we find that the price obtained in this paper is higher than that obtained from the Black-Scholes model. An empirical analysis is also included to confirm that the obtained results fit real data well.
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
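The protocol's arithmetic is captured by fitting the standard RB decay F(m) = A p^m + B to reference and interleaved sequences and forming the ratio of decay parameters; r_C = (d-1)(1 - p_C/p)/d is the standard interleaved estimate for Hilbert-space dimension d. The sketch below uses synthetic data in place of measured sequence fidelities.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    """Standard randomized-benchmarking decay model."""
    return A * p ** m + B

rng = np.random.default_rng(0)
m = np.arange(1, 200, 10)
f_ref = decay(m, 0.5, 0.995, 0.5) + rng.normal(0, 0.005, m.size)  # synthetic
f_int = decay(m, 0.5, 0.990, 0.5) + rng.normal(0, 0.005, m.size)  # synthetic

p_ref = curve_fit(decay, m, f_ref, p0=[0.5, 0.99, 0.5])[0][1]
p_int = curve_fit(decay, m, f_int, p0=[0.5, 0.99, 0.5])[0][1]

d = 2  # single-qubit Hilbert-space dimension
r_C = (d - 1) * (1 - p_int / p_ref) / d  # interleaved gate error estimate
print(f"estimated gate error: {r_C:.5f}")
```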
Random heteropolymers preserve protein function in foreign environments
NASA Astrophysics Data System (ADS)
Panganiban, Brian; Qiao, Baofu; Jiang, Tao; DelRe, Christopher; Obadia, Mona M.; Nguyen, Trung Dac; Smith, Anton A. A.; Hall, Aaron; Sit, Izaac; Crosby, Marquise G.; Dennis, Patrick B.; Drockenmuller, Eric; Olvera de la Cruz, Monica; Xu, Ting
2018-03-01
The successful incorporation of active proteins into synthetic polymers could lead to a new class of materials with functions found only in living systems. However, proteins rarely function under the conditions suitable for polymer processing. On the basis of an analysis of trends in protein sequences and characteristic chemical patterns on protein surfaces, we designed four-monomer random heteropolymers to mimic intrinsically disordered proteins for protein solubilization and stabilization in non-native environments. The heteropolymers, with optimized composition and statistical monomer distribution, enable cell-free synthesis of membrane proteins with proper protein folding for transport and enzyme-containing plastics for toxin bioremediation. Controlling the statistical monomer distribution in a heteropolymer, rather than the specific monomer sequence, affords a new strategy to interface with biological systems for protein-based biomaterials.
Singh, Vishwajeet; Sinha, Rahul Janak; Sankhwar, S N; Malik, Anita
2011-01-01
A prospective randomized study was conducted to compare the surgical parameters and stone clearance in patients who underwent percutaneous nephrolithotomy (PNL) under combined spinal-epidural anesthesia (CSEA) versus those who underwent PNL under general anesthesia (GA). Between January 2008 and December 2009, 64 patients with renal calculi were randomized into two groups and evaluated for the purpose of this study. Group 1 consisted of patients who underwent PNL under CSEA and Group 2 consisted of patients who underwent PNL under GA. The operative time, stone clearance rate, visual analog pain score, mean analgesic dose and mean hospital stay, among other parameters, were compared. The differences in postoperative visual analog pain score and analgesic requirement between the two groups were statistically significant. PNL under CSEA is as effective and safe as PNL under GA. Patients who undergo PNL under CSEA require a lower analgesic dose and have a shorter hospital stay. Copyright © 2011 S. Karger AG, Basel.
Accurate Diabetes Risk Stratification Using Machine Learning: Role of Missing Value and Outliers.
Maniruzzaman, Md; Rahman, Md Jahanur; Al-MehediHasan, Md; Suri, Harman S; Abedin, Md Menhazul; El-Baz, Ayman; Suri, Jasjit S
2018-04-10
Diabetes mellitus is a group of metabolic diseases in which blood sugar levels are too high. About 8.8% of the world's population was diabetic in 2017, and this figure is projected to reach nearly 10% by 2045. A major challenge is that machine learning-based classifiers applied to such data sets for risk stratification tend to show lower performance. Our objective was therefore to develop an optimized and robust machine learning (ML) system under the assumption that replacing missing values or outliers with a median configuration yields higher risk stratification accuracy. This ML-based risk stratification system was designed, optimized and evaluated as follows: features were extracted and optimized with six feature selection techniques (random forest, logistic regression, mutual information, principal component analysis, analysis of variance, and Fisher discriminant ratio) and combined with ten types of classifiers (linear discriminant analysis, quadratic discriminant analysis, naïve Bayes, Gaussian process classification, support vector machine, artificial neural network, Adaboost, logistic regression, decision tree, and random forest), under the hypothesis that replacing both missing values and outliers by computed medians improves risk stratification accuracy. The Pima Indian diabetes dataset (768 patients: 268 diabetic and 500 controls) was used. Our results demonstrate that replacing missing values and outliers by group median and median values, respectively, and then using the combination of random forest feature selection and random forest classification yields an accuracy, sensitivity, specificity, positive predictive value, negative predictive value and area under the curve of 92.26%, 95.96%, 79.72%, 91.14%, 91.20%, and 0.93, respectively. This is an improvement of 10% over previously published techniques. The system was validated for stability and reliability; the RF-based model showed the best performance when outliers were replaced by median values.
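The winning configuration reads directly as a small pipeline: median imputation, random-forest feature selection, random-forest classification. The sketch below uses stand-in data and an assumed top-k feature cut; it mirrors the recipe, not the authors' exact code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(768, 8))           # stand-in for the Pima features
y = rng.integers(0, 2, size=768)        # stand-in labels
X[rng.random(X.shape) < 0.05] = np.nan  # inject "missing" values

# median imputation (outliers would be clipped/replaced analogously)
X = np.where(np.isnan(X), np.nanmedian(X, axis=0), X)

# random-forest feature selection, then random-forest classification
selector = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
imp = selector.feature_importances_
keep = imp >= np.sort(imp)[-4]          # keep the top 4 features (assumed k)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X[:, keep], y, cv=5).mean())
```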
Aging in mortal superdiffusive Lévy walkers.
Stage, Helena
2017-12-01
A growing body of literature examines the effects of superdiffusive subballistic movement premeasurement (aging or time lag) on observations arising from single-particle tracking. A neglected aspect is the finite lifetime of these Lévy walkers, be they proteins, cells, or larger structures. We examine the effects of aging on the motility of mortal walkers, and discuss the means by which permanent stopping of walkers may be categorized as arising from "natural" death or experimental artifacts such as low photostability or radiation damage. This is done by comparison of the walkers' mean squared displacement (MSD) with the front velocity of propagation of a group of walkers, which is found to be invariant under time lags. For any running time distribution of a mortal random walker, the MSD is tempered by the stopping rate θ. This provides a physical interpretation for truncated heavy-tailed diffusion processes and serves as a tool by which to better classify the underlying running time distributions of random walkers. Tempering of aged MSDs raises the issue of misinterpreting superdiffusive motion which appears Brownian or subdiffusive over certain time scales.
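One common way to formalize this tempering, consistent with the description above though not necessarily the paper's exact expressions, is an exponential tempering of the running-time density:

```latex
% Schematic tempering of a heavy-tailed running-time density by a constant
% stopping (mortality) rate \theta; notation assumed, not the paper's.
\[
  \psi_{\theta}(t) \;=\;
    \frac{e^{-\theta t}\,\psi(t)}{\int_{0}^{\infty} e^{-\theta s}\,\psi(s)\,ds},
  \qquad \psi(t) \sim t^{-1-\mu},
\]
% moments that diverge under \psi become finite under \psi_\theta, so the
% MSD of surviving walkers is tempered at times t of order 1/\theta and beyond.
```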
Data-Enabled Quantification of Aluminum Microstructural Damage Under Tensile Loading
NASA Astrophysics Data System (ADS)
Wayne, Steven F.; Qi, G.; Zhang, L.
2016-08-01
The study of material failure with digital analytics is in its infancy and offers a new perspective to advance our understanding of damage initiation and evolution in metals. In this article, we study the failure of aluminum using data-enabled methods, statistics and data mining. Through the use of tension tests, we establish a multivariate acoustic-data matrix of random damage events, which typically are not visible and are very difficult to measure because of their variability, diversity and interactivity during damage processes. Aluminum alloy 6061-T651 and single-crystal aluminum with a (111) orientation were evaluated by comparing the acoustic signals collected from damage events caused primarily by slip in the single crystal and by multimode fracture in the alloy. We found the resulting acoustic damage-event data to be large semi-structured volumes of Big Data with the potential to be mined for information that describes the material's damage state under strain. Our data-enabled analyses have allowed us to determine statistical distributions of multiscale random damage that provide a means to quantify the material damage state.
On the Asymmetric Zero-Range in the Rarefaction Fan
NASA Astrophysics Data System (ADS)
Gonçalves, Patrícia
2014-02-01
We consider one-dimensional asymmetric zero-range processes starting from a step decreasing profile leading, in the hydrodynamic limit, to the rarefaction fan of the associated hydrodynamic equation. Under that initial condition, and for totally asymmetric jumps, we show that the weighted sum of joint probabilities for second class particles sharing the same site is convergent and we compute its limit. For partially asymmetric jumps, we derive the Law of Large Numbers for a second class particle, under the initial configuration in which all positive sites are empty, all negative sites are occupied with infinitely many first class particles and there is a single second class particle at the origin. Moreover, we prove that among the infinite characteristics emanating from the position of the second class particle it picks randomly one of them. The randomness is given in terms of the weak solution of the hydrodynamic equation, through some sort of renormalization function. By coupling the constant-rate totally asymmetric zero-range with the totally asymmetric simple exclusion, we derive limiting laws for more general initial conditions.
Aging in mortal superdiffusive Lévy walkers
NASA Astrophysics Data System (ADS)
Stage, Helena
2017-12-01
A growing body of literature examines the effects of superdiffusive subballistic movement premeasurement (aging or time lag) on observations arising from single-particle tracking. A neglected aspect is the finite lifetime of these Lévy walkers, be they proteins, cells, or larger structures. We examine the effects of aging on the motility of mortal walkers, and discuss the means by which permanent stopping of walkers may be categorized as arising from "natural" death or experimental artifacts such as low photostability or radiation damage. This is done by comparison of the walkers' mean squared displacement (MSD) with the front velocity of propagation of a group of walkers, which is found to be invariant under time lags. For any running time distribution of a mortal random walker, the MSD is tempered by the stopping rate θ. This provides a physical interpretation for truncated heavy-tailed diffusion processes and serves as a tool by which to better classify the underlying running time distributions of random walkers. Tempering of aged MSDs raises the issue of misinterpreting superdiffusive motion which appears Brownian or subdiffusive over certain time scales.
Improving performances of suboptimal greedy iterative biclustering heuristics via localization.
Erten, Cesim; Sözdinler, Melih
2010-10-15
Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the localization-based random extraction method REAL performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
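The localization idea (group correlated entries into local neighborhoods by reordering) followed by random extraction can be caricatured in a few lines. The sketch below substitutes a singular-vector ordering for the graph-theoretical method and a crude coherence score; it is an illustration, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 80))
A[50:90, 20:40] += 3.0  # a planted bicluster

# "localization": reorder rows and columns by the leading singular vectors
U, s, Vt = np.linalg.svd(A, full_matrices=False)
L = A[np.argsort(U[:, 0])][:, np.argsort(Vt[0])]

def random_extraction(mat, h=30, w=15, tries=500):
    """Randomly extract h-by-w submatrices and keep the most coherent one."""
    best, best_score = None, -np.inf
    for _ in range(tries):
        i = rng.integers(mat.shape[0] - h)
        j = rng.integers(mat.shape[1] - w)
        sub = mat[i:i + h, j:j + w]
        score = abs(sub.mean()) / (sub.std() + 1e-9)  # crude coherence score
        if score > best_score:
            best, best_score = (i, j), score
    return best, best_score

print(random_extraction(L))
```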
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While a lot of efforts have been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
Diffusion-advection within dynamic biological gaps driven by structural motion
NASA Astrophysics Data System (ADS)
Asaro, Robert J.; Zhu, Qiang; Lin, Kuanpo
2018-04-01
To study the significance of advection in the transport of solutes, or particles, within thin biological gaps (channels), we examine theoretically the process driven by stochastic fluid flow caused by random thermal structural motion, and we compare it with transport via diffusion. The model geometry chosen resembles the synaptic cleft; this choice is motivated by the cleft's readily modeled structure, which allows for well-defined mechanical and physical features that control the advection process. Our analysis defines a Péclet-like number, AD, that quantifies the ratio of time scales of advection versus diffusion. Another parameter, AM, is also defined by the analysis; it quantifies the full potential extent of advection in the absence of diffusion. These parameters provide a clear and compact description of the interplay among the well-defined structural, geometric, and physical properties vis-à-vis the advection versus diffusion process. For example, it is found that AD ~ 1/R², where R is the cleft diameter and hence the diffusion distance. This curious, and perhaps unexpected, result follows from the dependence on R of the structural motion that drives fluid flow. AM, on the other hand, is directly related to (essentially proportional to) the energetic input into structural motion, and thereby to fluid flow, as well as to the mechanical stiffness of the cleftlike structure. Our model analysis thus provides unambiguous insight into the prospect of competition between advection and diffusion within biological gaplike structures. The importance of the random, versus a regular, nature of structural motion, and of the resulting transient nature of advection under random motion, is made clear in our analysis. Further, by quantifying the effects of geometric and physical properties on the competition between advection and diffusion, our results clearly demonstrate the important role that metabolic energy (ATP) plays in this competitive process.
Weak convergence to isotropic complex SαS random measure.
Wang, Jun; Li, Yunmeng; Sang, Liheng
2017-01-01
In this paper, we prove that an isotropic complex symmetric α-stable (SαS) random measure can be approximated by a complex process constructed from integrals based on the Poisson process with random intensity.
Olson, Daniel W.; Dutta, Sarit; Laachi, Nabil; Tian, Mingwei; Dorfman, Kevin D.
2011-01-01
Using the two-state, continuous-time random walk model, we develop expressions for the mobility and the plate height during DNA electrophoresis in an ordered post array that delineate the contributions due to (i) the random distance between collisions and (ii) the random duration of a collision. These contributions are expressed in terms of the means and variances of the underlying stochastic processes, which we evaluate from a large ensemble of Brownian dynamics simulations performed using different electric fields and molecular weights in a hexagonal array of 1 μm posts with a 3 μm center-to-center distance. If we fix the molecular weight, we find that the collision frequency governs the mobility. In contrast, the average collision duration is the most important factor for predicting the mobility as a function of DNA size at constant Péclet number. The plate height is reasonably well-described by a single post rope-over-pulley model, provided that the extension of the molecule is small. Our results only account for dispersion inside the post array and thus represent a theoretical lower bound on the plate height in an actual device. PMID:21290387
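The decomposition into collision frequency and collision duration is easy to probe with a direct Monte Carlo of the two-state picture. In the sketch below all distributions and parameters are invented; mobility and plate height follow from the first two moments of the arrival time at length L.

```python
import numpy as np

rng = np.random.default_rng(0)
L, v = 100.0, 1.0  # array length and free-flight speed (invented units)
arrival = []
for _ in range(5000):
    x = t = 0.0
    while x < L:
        d = rng.exponential(5.0)   # random distance between collisions
        x += d
        t += d / v                 # free migration
        t += rng.exponential(2.0)  # random collision (holdup) duration
    arrival.append(t)

t = np.asarray(arrival)
mobility = L / t.mean()                      # mean velocity through the array
plate_height = L * t.var() / t.mean() ** 2   # dispersion per unit length
print(f"mobility ~ {mobility:.3f}, plate height ~ {plate_height:.3f}")
```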
Carvalho, Adriana Assis; Costa, Luciane Rezende
2013-12-10
Little is known about the views of mothers when their children are invited to participate in randomized clinical trials (RCTs) investigating medicines and/or invasive procedures. Our goal was to understand mothers' perceptions of the processes of informed consent and randomization in an RCT that divided uncooperative children into three intervention groups (physical restraint, sedation, and general anesthesia) for dental rehabilitation. This is a qualitative study based on semi-structured interviews with mothers accompanying children under 3 years old presenting with severe early childhood caries. Their responses were analyzed using content analysis. We identified one major theme in the 15 mothers' responses, "Understanding of, attitudes toward, and feelings about consenting to participate in an RCT involving advanced behavior guidance techniques and about randomization," derived from the following subcategories: confusion in defining techniques, questions after signing the consent form, lack of knowledge about the techniques, acceptance or questioning of the drawing, sharing responsibility with the child during the drawing, and feelings of faith in God, fear, powerlessness to choose, and relief from or an increase in pressure. Despite mothers' misunderstanding, vulnerability, and contradictory feelings, they were willing to set aside their concerns in order to complete their children's dental treatment.
Palacios, Julia A; Minin, Vladimir N
2013-03-01
Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more believed aspects of the viral demographic histories than the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.
Coherence-generating power of quantum dephasing processes
NASA Astrophysics Data System (ADS)
Styliaris, Georgios; Campos Venuti, Lorenzo; Zanardi, Paolo
2018-03-01
We provide a quantification of the capability of various quantum dephasing processes to generate coherence out of incoherent states. The measures defined, admitting computable expressions for any finite Hilbert-space dimension, are based on probabilistic averages and arise naturally from the viewpoint of coherence as a resource. We investigate how the capability of a dephasing process (e.g., a nonselective orthogonal measurement) to generate coherence depends on the relevant bases of the Hilbert space over which coherence is quantified and the dephasing process occurs, respectively. We extend our analysis to include those Lindblad time evolutions which, in the infinite-time limit, dephase the system under consideration and calculate their coherence-generating power as a function of time. We further identify specific families of such time evolutions that, although dephasing, have optimal (over all quantum processes) coherence-generating power for some intermediate time. Finally, we investigate the coherence-generating capability of random dephasing channels.
Statistical Modeling of Single Target Cell Encapsulation
Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan
2011-01-01
High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
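The core of such a statistical model is Poisson thinning: if cell loading per droplet is Poisson with mean λ and a fraction f of cells are targets, then target and non-target counts in a droplet are independent Poisson variables with means λf and λ(1-f). A minimal sketch, with invented parameter values:

```python
from math import exp, factorial

def poisson(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

lam, f = 0.3, 0.1  # mean cells per droplet and target-cell fraction (assumed)
p_one_target = poisson(1, lam * f)       # exactly one target cell
p_no_others = poisson(0, lam * (1 - f))  # and zero non-target cells
print(f"P(single target cell, no others) = {p_one_target * p_no_others:.4f}")
```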
Rai, Krishna Kumar; Rai, Nagendra; Rai, Shashi Pandey
2018-07-01
Salicylic acid (SA) and sodium nitroprusside (SNP, an NO donor) modulate plant growth and development processes, and recent findings have also revealed their involvement in the regulation of epigenetic factors under stress conditions. In the present study, some of these factors were comparatively studied in hyacinth bean plants subjected to a high temperature (HT) environment (40-42 °C) with and without exogenous application of SA and SNP under field conditions. Exogenous application of SA and SNP substantially modulated the growth and biophysical processes of hyacinth bean plants under the HT environment. It also markedly regulated the activities of antioxidant enzymes, modulated the mRNA levels of certain enzymes, improved plant water relations, enhanced photosynthesis, and thereby increased plant defence under HT. The coupled restriction enzyme digestion-random amplification (CRED-RA) technique revealed that many methylation changes were "dose dependent" and that HT significantly increased DNA damage, as evidenced by both increases and decreases in band profiles and in methylation and de-methylation patterns. Thus, the results of the present study clearly show that exogenous SA and SNP regulate the DNA methylation pattern, modulate stress-responsive genes, and can impart transient HT tolerance by synchronizing the growth and physiological acclimatization of plants, thus narrowing the gaps between physio-biochemical and molecular events in addressing HT tolerance. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
On the mapping associated with the complex representation of functions and processes.
NASA Technical Reports Server (NTRS)
Harger, R. O.
1972-01-01
The mapping between function spaces that is implied by the representation of a real 'bandpass' function by a complex 'low-pass' function is explicitly accepted. The discussion is extended to the representation of stationary random processes, where the mapping is between spaces of random processes. This approach clarifies the nature of the complex representation, especially in the case of random processes, and, in addition, derives the properties of the complex representation.
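For orientation, the textbook form of this representation and its autocorrelation mapping (for a proper, wide-sense stationary complex envelope) can be written as follows; this is a standard identity stated for context, not a formula taken from the report.

```latex
% Complex (low-pass) representation of a real bandpass process x(t) about a
% carrier \omega_0, with complex envelope \gamma(t) assumed proper and
% wide-sense stationary.
\[
  x(t) \;=\; \operatorname{Re}\!\left\{ \gamma(t)\, e^{j\omega_0 t} \right\},
  \qquad
  R_x(\tau) \;=\; \tfrac{1}{2}\,
      \operatorname{Re}\!\left\{ R_\gamma(\tau)\, e^{j\omega_0 \tau} \right\}.
\]
```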
Probability of stress-corrosion fracture under random loading
NASA Technical Reports Server (NTRS)
Yang, J. N.
1974-01-01
The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under stationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
Convergence to equilibrium under a random Hamiltonian.
Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek
2012-09-01
We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
Convergence to equilibrium under a random Hamiltonian
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek
2012-09-01
We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We find that the equilibration time is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
Flexible organic light-emitting devices with a smooth and transparent silver nanowire electrode
NASA Astrophysics Data System (ADS)
Cui, Hai-Feng; Zhang, Yi-Fan; Li, Chuan-Nan
2014-07-01
We demonstrate a flexible organic light-emitting device (OLED) using a silver nanowire (AgNW) transparent electrode. A template stripping process was employed to fabricate the AgNW electrode on a photopolymer substrate. With this approach, a random AgNW network electrode can be transferred to the flexible substrate and its roughness is successfully decreased. As a result, devices obtained by this method exhibit high efficiency. In addition, the flexible OLEDs maintain good performance under a small bending radius.
Framing effects under cognitive load: the role of working memory in risky decisions.
Whitney, Paul; Rinehart, Christa A; Hinson, John M
2008-12-01
Framing effects occur in a wide range of laboratory and natural decision contexts, but the underlying processes that produce framing effects are not well understood. We explored the role of working memory (WM) in framing by manipulating WM loads during risky decisions. After starting with a hypothetical stake of money, participants were then presented with a lesser amount that they could keep for certain (positive frame) or lose for certain (negative frame). They made a choice between the sure amount and a gamble in which they could either keep or lose all of the original stake. On half of the trials, the choice was made while maintaining a concurrent WM load of random letters. In both load and no-load conditions, we replicated the typical finding of risk aversion with positive frames and risk seeking with negative frames. In addition, people made fewer decisions to accept the gamble under conditions of higher cognitive load. The data are congruent with a dual-process reasoning framework in which people employ a heuristic to make satisfactory decisions with minimal effort.
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations or laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function are required in general to characterize gene switching.
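The switching model summarized above lends itself to a direct event-driven simulation. The following is a minimal sketch, not the authors' population-density solver: the transcription rate lam, the decay rate delta, and the gamma-distributed active dwell times are illustrative assumptions, used only to show how non-exponential dwell times enter the copy-number statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mrna(t_end, lam=10.0, delta=1.0,
                  draw_T0=lambda: rng.exponential(1.0),
                  draw_T1=lambda: rng.gamma(2.0, 0.5)):
    """Event-driven two-state gene with general dwell times: transcription is
    a Poisson process (rate lam) during active periods; each mRNA decays
    independently after an Exp(delta) lifetime."""
    births = []
    t, active = 0.0, False
    while t < t_end:
        dwell = draw_T1() if active else draw_T0()
        if active:
            # Poisson number of transcripts in this active window, placed
            # uniformly in time (a standard property of the Poisson process)
            n = rng.poisson(lam * dwell)
            births.extend(t + dwell * rng.random(n))
        t += dwell
        active = not active
    births = np.asarray(births)
    deaths = births + rng.exponential(1.0 / delta, size=births.size)
    t_obs = 0.9 * t_end              # observe late, near quasi-steady state
    return np.sum((births <= t_obs) & (deaths > t_obs))

samples = [simulate_mrna(200.0) for _ in range(2000)]
print("mean copy number:", np.mean(samples), "variance:", np.var(samples))
```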
Double jeopardy in inferring cognitive processes
Fific, Mario
2014-01-01
Inferences we make about underlying cognitive processes can be jeopardized in two ways due to problematic forms of aggregation. First, averaging across individuals is typically considered a very useful tool for removing random variability. The threat is that averaging across subjects leads to averaging across different cognitive strategies, thus harming our inferences. The second threat comes from the construction of inadequate research designs possessing a low diagnostic accuracy of cognitive processes. For that reason we introduced the systems factorial technology (SFT), which has primarily been designed to make inferences about underlying processing order (serial, parallel, coactive), stopping rule (terminating, exhaustive), and process dependency. SFT proposes that the minimal research design complexity to learn about n cognitive processes should be equal to 2^n. In addition, SFT proposes that (a) each cognitive process should be controlled by a separate experimental factor, and (b) the saliency levels of all factors should be combined in a full factorial design. In the current study, the author cross-combined the levels of the jeopardies in a 2 × 2 analysis, leading to four different analysis conditions. The results indicate a decline in the diagnostic accuracy of inferences made about cognitive processes due to the presence of each jeopardy in isolation and when combined. The results warrant the development of more individual-subject analyses and the utilization of full-factorial (SFT) experimental designs. PMID:25374545
Scaling Limits and Generic Bounds for Exploration Processes
NASA Astrophysics Data System (ADS)
Bermolen, Paola; Jonckheere, Matthieu; Sanders, Jaron
2017-12-01
We consider exploration algorithms of the random sequential adsorption type, both for homogeneous random graphs and for random geometric graphs based on spatial Poisson processes. At each step, a vertex of the graph becomes active and its neighboring nodes become blocked. Given an initial number of vertices N growing to infinity, we study statistical properties of the proportion of explored (active or blocked) nodes in time using scaling limits. We obtain exact limits for homogeneous graphs and prove an explicit central limit theorem for the final proportion of active nodes, known as the jamming constant, through a diffusion approximation for the exploration process, which can be described as a unidimensional process. We then focus on bounding the trajectories of such exploration processes on random geometric graphs, i.e., random sequential adsorption. As opposed to exploration processes on homogeneous random graphs, these do not allow for such a dimensional reduction. Instead we derive a fundamental relationship between the number of explored nodes and the discovered volume in the spatial process, and we obtain generic bounds for the fluid limit and jamming constant: bounds that are independent of the dimension of space and of the detailed shape of the volume associated with the discovered node. Lastly, using coupling techniques, we give trajectorial interpretations of the generic bounds.
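The exploration dynamics described here are easy to mimic numerically. A minimal sketch follows, assuming an Erdős-Rényi graph as the homogeneous case and using the equivalence between activating a uniformly chosen unexplored vertex and processing vertices in a uniform random order; the graph size and mean degree are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def jamming_fraction(n=2000, mean_degree=4.0):
    """Random sequential adsorption on an Erdos-Renyi graph: activate a
    uniformly chosen unexplored vertex, block its neighbours, and repeat
    until every vertex is explored (active or blocked)."""
    g = nx.fast_gnp_random_graph(n, mean_degree / n,
                                 seed=int(rng.integers(1_000_000_000)))
    state = np.zeros(n, dtype=int)      # 0 unexplored, 1 active, 2 blocked
    for v in rng.permutation(n):        # uniform activation order
        if state[v] == 0:
            state[v] = 1
            for u in g.neighbors(v):
                if state[u] == 0:
                    state[u] = 2
    return np.mean(state == 1)

estimates = [jamming_fraction() for _ in range(20)]
print("jamming constant estimate:", np.mean(estimates), "+/-", np.std(estimates))
```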
Allen, Ashleigh A; Chen, Donna T; Bonnie, Richard J; Ko, Tomohiro M; Suratt, Colleen E; Lee, Joshua D; Friedmann, Peter D; Gordon, Michael; McDonald, Ryan; Murphy, Sean M; Boney, Tamara Y; Nunes, Edward V; O'Brien, Charles P
2017-10-01
Concerns persist that individuals with substance use disorders who are under community criminal justice supervision experience circumstances that might compromise their provision of valid, informed consent for research participation. These concerns include the possibilities that desire to obtain access to treatment might lead individuals to ignore important information about research participation, including information about risks, or that cognitive impairment associated with substance use might interfere with attending to important information. We report results from a consent quiz (CQ) administered in a multisite randomized clinical trial of long-acting naltrexone to prevent relapse to opioid use disorder among adults under community criminal justice supervision, a treatment option that is difficult for this population to access. Participants were required to answer all 11 items correctly before randomization. On average, participants answered 9.8 items correctly (89%) at the baseline first attempt (n=306). At week 21 (n=212), participants scored 87% (9.5 items correct) without review. Performance was equivalent to, or better than, published results from other populations on a basic consent quiz instrument across multiple content domains. The consent quiz is an efficient method to screen for adequate knowledge of consent information as part of the informed consent process. Clinical researchers who are concerned about these issues should consider using a consent quiz with corrected feedback to enhance the informed consent process. Overall, while primarily useful as an educational tool, employing a CQ as part of the gateway to participation in research may be particularly important as the field continues to advance and tests novel experimental treatments with significant risks and uncertain potential for benefit. Copyright © 2017. Published by Elsevier Inc.
Harrison, Xavier A
2015-01-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed poorly when models contained <5 levels of the random intercept term, especially for estimating variance components, and this effect appeared independent of total sample size. These results suggest that OLRE are a useful tool for modelling overdispersion in Binomial data, but that they do not perform well in all circumstances and researchers should take care to verify the robustness of parameter estimates of OLRE models.
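The two overdispersion mechanisms compared in the study can be illustrated with a small simulation. This is a hedged sketch, not the author's original mixed-model analysis: it generates Binomial proportion data whose extra variation comes either from a Beta-Binomial mixture or from Gaussian noise on the logit scale (the process an observation-level random effect absorbs), and reports a simple dispersion ratio; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, trials, p = 500, 20, 0.3

# Beta-Binomial overdispersion: success probability varies per observation,
# parameterised by an intraclass correlation rho = 1 / (a + b + 1)
rho = 0.1
a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
y_bb = rng.binomial(trials, rng.beta(a, b, n_obs))

# OLRE-style overdispersion: random noise on the linear (logit) scale
eta = np.log(p / (1 - p)) + rng.normal(0.0, 1.0, n_obs)
y_olre = rng.binomial(trials, 1 / (1 + np.exp(-eta)))

def dispersion(y):
    """Ratio of observed to binomial variance (1 = no overdispersion)."""
    phat = y.mean() / trials
    return y.var() / (trials * phat * (1 - phat))

print("beta-binomial dispersion:", round(dispersion(y_bb), 2))
print("OLRE-style dispersion:  ", round(dispersion(y_olre), 2))
```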
Friction Stir Back Extrusion of Aluminium Alloys for Automotive Applications
NASA Astrophysics Data System (ADS)
Xu, Zeren
Since the invention of Friction Stir Welding in 1991 as a solid-state joining technique, extensive scientific investigations have been carried out to understand fundamental aspects of material behavior when processed by this technique, in order to optimize processing conditions as well as the mechanical properties of the welds. Based on the basic principles of Friction Stir Welding, several derivatives have also been developed, such as Friction Stir Processing, Friction Extrusion and Friction Stir Back Extrusion. Friction Stir Back Extrusion is a novel technique that was proposed recently and is designed for fabricating tubes from lightweight alloys. Some preliminary results have been reported regarding the microstructure and mechanical properties of Friction Stir Back Extrusion processed AZ31 magnesium alloy; however, systematic study and in-depth investigations are still needed to understand the material behavior and underlying mechanisms under Friction Stir Back Extrusion, especially for age-hardenable Al alloys. In the present study, Friction Stir Back Extrusion processed AA6063-T5 and AA7075-T6 alloys are analyzed with respect to grain structure evolution, micro-texture change, recrystallization mechanisms, precipitation sequence and mechanical properties. Optical Microscopy, Electron Backscatter Diffraction, Transmission Electron Microscopy, Vickers Hardness measurements and uniaxial tensile tests are carried out to characterize the microstructural change as well as the micro- and macro-mechanical properties of the processed tubes. Special attention is paid to the micro-texture evolution across the entire tube and to the dynamic recrystallization mechanisms that are responsible for grain refinement. Significant grain refinement has been observed near the processing zone, while the tube wall is characterized by an inhomogeneous grain structure across the thickness for both alloys. Dissolution of existing precipitates is noticed under the thermal histories imposed by the Friction Stir Back Extrusion process, resulting in decreased strength but improved elongation of the processed tubes; a post-process aging step can effectively restore the mechanical properties of the processed tubes by allowing for the reprecipitation of solute elements in the form of fine, dispersed precipitates. Texture analysis performed for the AA6063 alloy suggests the dominance of simple shear type textures with a clear transition from the initial texture to stable B/B̄ components via intermediate types that are stable under moderate strain levels. In order to identify the texture components properly, rigid body rotations are applied to the existing coordinate system to align it to the local shear reference frame. Surprisingly, for AA7075 tubes, fibers are observed to be the dominant texture components in the transition region as well as in the thermomechanically affected zone, while the processing zone is characterized by random texture. The underlying mechanisms responsible for the formation of random texture are discussed in Chapter 5 based on Electron Backscatter Diffraction analysis. Comparative discussions are also carried out for the recrystallization mechanisms that are responsible for the grain structure evolution of both alloys. Continuous grain subdivision and reorientation is cited as the dominant mechanism for the recrystallization of the AA6063 alloy, while dynamic recrystallization occurs mainly in the form of Geometric Dynamic Recrystallization and progressive subgrain rotations near grain boundaries in the AA7075 alloy.
Spreading Effect in Industrial Complex Network Based on Revised Structural Holes Theory
Xing, Lizhi; Ye, Qing; Guan, Jun
2016-01-01
This paper analyzed the spreading effect of industrial sectors with a complex network model from the perspective of econophysics. Input-output analysis, as an important research tool, focuses more on static analysis. However, the fundamental aim of industry analysis is to figure out how interaction between different industries impacts economic development, which turns out to be a dynamic process. Thus, an industrial complex network based on input-output tables from WIOD is proposed as a bridge connecting accurate static quantitative analysis with comparable dynamic analysis. With the application of revised structural holes theory, flow betweenness and random walk centrality were chosen to evaluate industrial sectors' long-term and short-term spreading effects, respectively. It shows that industries with higher flow betweenness or random walk centrality bring about more intensive industrial spreading effects in the industrial chains they stand in, because the value stream transmission of an industrial sector depends on how many products or services it can get from the other ones, and such sectors are regarded as brokers with greater information superiority and more intermediate interests. PMID:27218468
NASA Astrophysics Data System (ADS)
Jian, Wen-Yi; You, Hsin-Chiang; Wu, Cheng-Yen
2018-01-01
In this work, we used a sol-gel process to fabricate a ZnO-ZrO2-stacked resistive switching random access memory (ReRAM) device and investigated its switching mechanism. The Gibbs free energy in ZnO, which is higher than that in ZrO2, facilitates the oxidation and reduction reactions of filaments in the ZnO layer. The current-voltage (I-V) characteristics of the device revealed a forming-free operation because of nonlattice oxygen in the oxide layer. In addition, the device can operate under bipolar or unipolar conditions with a reset voltage of 0 to ±2 V, indicating that in this device, Joule heating dominates in the reset process and the electric field dominates in the set process. Furthermore, the characteristics reveal why the fabricated device exhibits a more scattered distribution of set voltages than of reset voltages. These results will enable the fabrication of future ReRAM devices with double-layer oxide structures with improved characteristics.
Spreading Effect in Industrial Complex Network Based on Revised Structural Holes Theory.
Xing, Lizhi; Ye, Qing; Guan, Jun
2016-01-01
This paper analyzed the spreading effect of industrial sectors with a complex network model from the perspective of econophysics. Input-output analysis, as an important research tool, focuses more on static analysis. However, the fundamental aim of industry analysis is to figure out how interaction between different industries impacts economic development, which turns out to be a dynamic process. Thus, an industrial complex network based on input-output tables from WIOD is proposed as a bridge connecting accurate static quantitative analysis with comparable dynamic analysis. With the application of revised structural holes theory, flow betweenness and random walk centrality were chosen to evaluate industrial sectors' long-term and short-term spreading effects, respectively. It shows that industries with higher flow betweenness or random walk centrality bring about more intensive industrial spreading effects in the industrial chains they stand in, because the value stream transmission of an industrial sector depends on how many products or services it can get from the other ones, and such sectors are regarded as brokers with greater information superiority and more intermediate interests.
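As a hedged illustration of the two centralities contrasted above: networkx's current-flow betweenness coincides with Newman's random-walk betweenness, which is used here as a stand-in for the short-term measure; the toy "industrial network", its sector names, and its edge weights are invented for the example, and the paper's own flow betweenness is defined on input-output tables rather than on this simple graph.

```python
import networkx as nx

# Toy weighted graph with edges as inter-sector value flows (invented data)
g = nx.Graph()
g.add_weighted_edges_from([
    ("mining", "metals", 5.0), ("metals", "machinery", 4.0),
    ("machinery", "autos", 3.0), ("chemicals", "autos", 2.0),
    ("mining", "chemicals", 1.0), ("metals", "autos", 1.5),
])

# Current-flow betweenness equals Newman's random-walk betweenness
rw = nx.current_flow_betweenness_centrality(g, weight="weight")
# Shortest-path analogue (note: weights are treated as distances here)
fb = nx.betweenness_centrality(g, weight="weight")

for sector in sorted(g, key=rw.get, reverse=True):
    print(f"{sector:10s} random-walk: {rw[sector]:.3f}  shortest-path: {fb[sector]:.3f}")
```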
Monte-Carlo simulations of the clean and disordered contact process in three space dimensions
NASA Astrophysics Data System (ADS)
Vojta, Thomas
2013-03-01
The absorbing-state transition in the three-dimensional contact process with and without quenched randomness is investigated by means of Monte-Carlo simulations. In the clean case, a reweighting technique is combined with a careful extrapolation of the data to infinite time to determine with high accuracy the critical behavior in the three-dimensional directed percolation universality class. In the presence of quenched spatial disorder, our data demonstrate that the absorbing-state transition is governed by an unconventional infinite-randomness critical point featuring activated dynamical scaling. The critical behavior of this transition does not depend on the disorder strength, i.e., it is universal. Close to the disordered critical point, the dynamics is characterized by the nonuniversal power laws typical of a Griffiths phase. We compare our findings to the results of other numerical methods, and we relate them to a general classification of phase transitions in disordered systems based on the rare region dimensionality. This work has been supported in part by the NSF under grants no. DMR-0906566 and DMR-1205803.
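For readers who want to experiment with the model, a one-dimensional Gillespie-style sketch of the clean contact process is given below; the paper works in three dimensions with far larger systems and careful extrapolation, and the lattice size, rate lam, and run time here are illustrative. Infected sites heal at rate 1 and infect each neighbour at rate lam/2.

```python
import numpy as np

rng = np.random.default_rng(3)

def contact_process(L=200, lam=3.5, t_max=200.0):
    """Continuous-time contact process on a 1D ring, starting fully infected.
    Each infected site carries total rate 1 + lam (heal or infect), so picking
    a uniform infected site and splitting by probability is exact Gillespie."""
    infected = np.ones(L, dtype=bool)
    t = 0.0
    while t < t_max:
        n_inf = infected.sum()
        if n_inf == 0:
            return 0.0                          # absorbing state reached
        t += rng.exponential(1.0 / (n_inf * (1.0 + lam)))
        site = rng.choice(np.flatnonzero(infected))
        if rng.random() < 1.0 / (1.0 + lam):
            infected[site] = False              # recovery
        else:
            nbr = (site + rng.choice([-1, 1])) % L
            infected[nbr] = True                # infection attempt
    return infected.mean()

# lam = 3.5 is above the 1D critical point, so a finite density survives
print("surviving density at lam=3.5:", contact_process())
```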
c-Fos expression predicts long-term social memory retrieval in mice.
Lüscher Dias, Thomaz; Fernandes Golino, Hudson; Moura de Oliveira, Vinícius Elias; Dutra Moraes, Márcio Flávio; Schenatto Pereira, Grace
2016-10-15
The way the rodent brain generally processes socially relevant information is rather well understood. How social information is stored into long-term social memory, however, is still under debate. Here, brain c-Fos expression was measured after adult mice were exposed to familiar or novel juveniles, and expression was compared in several memory-related and socially relevant brain areas. The machine-learning algorithm Random Forest was then used to predict the social interaction category of adult mice based on c-Fos expression in these areas. Interaction with a familiar conspecific altered brain activation in the olfactory bulb, amygdala, hippocampus, lateral septum and medial prefrontal cortex. Remarkably, Random Forest was able to predict interaction with a familiar juvenile with 100% accuracy. Activity in the olfactory bulb, amygdala, hippocampus and medial prefrontal cortex was crucial to this prediction. From our results, we suggest that long-term social memory depends on initial social olfactory processing in the medial amygdala and its output connections, acting synergistically with non-social contextual integration by the hippocampus and with medial prefrontal cortex top-down modulation of primary olfactory structures. Copyright © 2016 Elsevier B.V. All rights reserved.
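The classification step can be reproduced in outline with scikit-learn. The sketch below uses synthetic c-Fos counts with invented region means rather than the study's data, simply to show the Random Forest workflow of cross-validated prediction followed by feature-importance ranking of brain areas.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Hypothetical c-Fos counts per region (columns) for 40 mice (rows);
# the group means below are invented for illustration only
regions = ["olfactory_bulb", "amygdala", "hippocampus", "septum", "mPFC"]
novel = rng.normal(loc=[10, 9, 8, 6, 7], scale=2.0, size=(20, 5))
familiar = rng.normal(loc=[7, 6, 9, 6, 9], scale=2.0, size=(20, 5))
X = np.vstack([novel, familiar])
y = np.array([0] * 20 + [1] * 20)          # 0 = novel, 1 = familiar

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
for name, imp in sorted(zip(regions, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:15s} importance: {imp:.3f}")
```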
Effects of ignition location models on the burn patterns of simulated wildfires
Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.
2011-01-01
Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.
Soshi, Takahiro; Nakajima, Heizo; Hagiwara, Hiroko
2016-10-01
Static knowledge about the grammar of a natural language is represented in the cortico-subcortical system. However, the differences in dynamic verbal processing under different cognitive conditions are unclear. To clarify this, we conducted an electrophysiological experiment involving a semantic priming paradigm in which semantically congruent or incongruent word sequences (prime nouns-target verbs) were randomly presented. We examined the event-related brain potentials that occurred in response to congruent and incongruent target words that were preceded by primes with or without grammatical case markers. The two participant groups performed either the shallow (lexical judgment) or deep (direct semantic judgment) semantic tasks. We hypothesized that, irrespective of the case markers, the congruent targets would reduce centro-posterior N400 activities under the deep semantic condition, which induces selective attention to the semantic relatedness of content words. However, the same congruent targets with correct case markers would reduce lateralized negativity under the shallow semantic condition because grammatical case markers are related to automatic structural integration under semantically unattended conditions. We observed that congruent targets (e.g., 'open') that were preceded by primes with congruent case markers (e.g., 'shutter-object case') reduced lateralized negativity under the shallow semantic condition. In contrast, congruent targets, irrespective of case markers, consistently yielded N400 reductions under the deep semantic condition. To summarize, human neural verbal processing differed in response to the same grammatical markers in the same verbal expressions under semantically attended or unattended conditions.
A new class of random processes with application to helicopter noise
NASA Technical Reports Server (NTRS)
Hardin, Jay C.; Miamee, A. G.
1989-01-01
The concept of dividing random processes into classes (e.g., stationary, locally stationary, periodically correlated, and harmonizable) has long been employed. A new class of random processes is introduced which includes many of these processes as well as other interesting processes which fall into none of the above classes. Such random processes are denoted as linearly correlated. This class is shown to include the familiar stationary and periodically correlated processes as well as many other, both harmonizable and non-harmonizable, nonstationary processes. When a process is linearly correlated for all t and harmonizable, its two-dimensional power spectral density $S_x(\omega_1, \omega_2)$ is shown to take a particularly simple form, being non-zero only on lines such that $\omega_1 - \omega_2 = \pm r_k$, where the $r_k$'s are (not necessarily equally spaced) roots of a characteristic function. The relationship of such processes to the class of stationary processes is examined. In addition, the application of such processes in the analysis of typical helicopter noise signals is described.
A new class of random processes with application to helicopter noise
NASA Technical Reports Server (NTRS)
Hardin, Jay C.; Miamee, A. G.
1989-01-01
The concept of dividing random processes into classes (e.g., stationary, locally stationary, periodically correlated, and harmonizable) has long been employed. A new class of random processes is introduced which includes many of these processes as well as other interesting processes which fall into none of the above classes. Such random processes are denoted as linearly correlated. This class is shown to include the familiar stationary and periodically correlated processes as well as many other, both harmonizable and non-harmonizable, nonstationary processes. When a process is linearly correlated for all t and harmonizable, its two-dimensional power spectral density $S_x(\omega_1, \omega_2)$ is shown to take a particularly simple form, being non-zero only on lines such that $\omega_1 - \omega_2 = \pm r_k$, where the $r_k$'s are (not necessarily equally spaced) roots of a characteristic function. The relationship of such processes to the class of stationary processes is examined. In addition, the application of such processes in the analysis of typical helicopter noise signals is described.
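A quick numerical illustration of the simplest member of this hierarchy: an amplitude-modulated stationary sequence is periodically correlated, so its variance is a periodic function of time rather than a constant. The modulation depth, period, and white-noise carrier below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(5)

# x(t) = (1 + a*cos(w0*t)) * s(t), with s(t) white for simplicity,
# is periodically correlated with period 2*pi/w0 (here, 20 samples)
n, a, w0 = 200_000, 0.8, 2 * np.pi / 20
t = np.arange(n)
x = (1 + a * np.cos(w0 * t)) * rng.normal(size=n)

# Estimate Var[x(t)] as a function of the phase t mod 20: it oscillates
# between (1 - a)^2 and (1 + a)^2 instead of being constant
var_by_phase = np.array([x[p::20].var() for p in range(20)])
print(np.round(var_by_phase, 2))
```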
Neutral null models for diversity in serial transfer evolution experiments.
Harpak, Arbel; Sella, Guy
2014-09-01
Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division and noisiness in population size in the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
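The serial transfer setting can be caricatured in a few lines. The sketch below is a deliberately coarse stand-in for the paper's coalescent treatment: growth is deterministic, mutation is per-division with an assumed rate mu, and dilution is uniform sampling back to the bottleneck; it only illustrates how neutral marker diversity accumulates from a single clone across transfers.

```python
import numpy as np

rng = np.random.default_rng(6)

def serial_transfer(n_transfers=20, bottleneck=1_000, growth_factor=100,
                    mu=1e-3):
    """Neutral serial-transfer sketch: a clonal population grows, mutates,
    and is diluted back to the bottleneck size each transfer. Tracks the
    frequency of cells carrying at least one (neutral) marker mutation."""
    mutant = np.zeros(bottleneck, dtype=bool)      # start from a single clone
    for _ in range(n_transfers):
        # deterministic growth; stochastic mutation at each division
        grown = np.repeat(mutant, growth_factor)
        grown |= rng.random(grown.size) < mu
        # dilution stage: sample the bottleneck without replacement
        keep = rng.choice(grown.size, size=bottleneck, replace=False)
        mutant = grown[keep]
    return mutant.mean()

freqs = [serial_transfer() for _ in range(10)]
print("neutral marker frequency after 20 transfers:", np.round(freqs, 3))
```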
Autonomous unobtrusive detection of mild cognitive impairment in older adults.
Akl, Ahmad; Taati, Babak; Mihailidis, Alex
2015-05-01
The current diagnosis process of dementia is resulting in a high percentage of cases with delayed detection. To address this problem, in this paper, we explore the feasibility of autonomously detecting mild cognitive impairment (MCI) in the older adult population. We implement a signal processing approach equipped with a machine learning paradigm to process and analyze real-world data acquired using home-based unobtrusive sensing technologies. Using the sensor and clinical data pertaining to 97 subjects, acquired over an average period of three years, a number of measures associated with the subjects' walking speed and general activity in the home were calculated. Different time spans of these measures were used to generate feature vectors to train and test two machine learning algorithms namely support vector machines and random forests. We were able to autonomously detect MCI in older adults with an area under the ROC curve of 0.97 and an area under the precision-recall curve of 0.93 using a time window of 24 weeks. This study is of great significance since it can potentially assist in the early detection of cognitive impairment in older adults.
Universality in the dynamical properties of seismic vibrations
NASA Astrophysics Data System (ADS)
Chatterjee, Soumya; Barat, P.; Mukherjee, Indranil
2018-02-01
We have studied the statistical properties of the observed magnitudes of seismic vibration data in discrete time in an attempt to understand the underlying complex dynamical processes. The observed magnitude data are taken from six different geographical locations. All possible magnitudes are considered in the analysis, including catastrophic vibrations, foreshocks, aftershocks and commonplace daily vibrations. The probability distribution functions of these data sets obey a scaling law and display a certain universality characteristic. To investigate the universality features in the observed data generated by a complex process, we applied Random Matrix Theory (RMT) in the framework of the Gaussian Orthogonal Ensemble (GOE). For all six locations, the observed data show a close fit with the predictions of RMT. This reinforces the idea of universality in the dynamical processes generating seismic vibrations.
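The GOE comparison invoked above can be reproduced for synthetic matrices. A minimal sketch, not the authors' analysis of seismic magnitudes: it samples GOE matrices, takes nearest-neighbour spacings from a narrow bulk window (a crude unfolding), and compares the spacing histogram with the Wigner surmise for GOE.

```python
import numpy as np

rng = np.random.default_rng(7)

def goe_spacings(n=400, trials=50):
    """Bulk nearest-neighbour spacings for the Gaussian Orthogonal Ensemble,
    crudely unfolded by the local mean spacing."""
    out = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / np.sqrt(2)                 # real symmetric: GOE
        ev = np.linalg.eigvalsh(h)
        s = np.diff(ev[int(0.45 * n): int(0.55 * n)])   # narrow bulk window
        out.append(s / s.mean())
    return np.concatenate(out)

s = goe_spacings()
# Wigner surmise for GOE: p(s) = (pi/2) * s * exp(-pi * s**2 / 4)
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
mid = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi / 2) * mid * np.exp(-np.pi * mid**2 / 4)
print("max |empirical - Wigner|:", np.abs(hist - wigner).max())
```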
Multiple-predators-based capture process on complex networks
NASA Astrophysics Data System (ADS)
Ramiz Sharafat, Rajput; Pu, Cunlai; Li, Jie; Chen, Rongbin; Xu, Zhongqi
2017-03-01
The predator/prey (capture) problem is a prototype of many network-related applications. We study the capture process on complex networks by considering multiple predators from multiple sources. In our model, some lions start from multiple sources simultaneously to capture the lamb by biased random walks, which are controlled with a free parameter $\alpha$. We derive the distribution of the lamb's lifetime and the expected lifetime $\langle T \rangle$. Through simulation, we find that the expected lifetime drops substantially with an increasing number of lions. We also study how the underlying topological structure affects the capture process, and find that locating the lamb on small-degree nodes rather than on large-degree nodes prolongs its lifetime. Moreover, dense or homogeneous network structures work against the survival of the lamb.
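A simulation in the spirit of this model is sketched below, with assumptions flagged: a Barabási-Albert graph stands in for the complex network, the bias is implemented by stepping to a neighbour with probability proportional to its degree raised to alpha, and the lamb is stationary; graph size, lion counts, and run lengths are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)

def capture_time(g, n_lions=3, alpha=1.0, t_max=100_000):
    """Lions do degree-biased random walks (step to neighbour u with
    probability ~ degree(u)**alpha); the lamb sits still on a random node.
    Returns the first time any lion reaches the lamb's node."""
    nodes = list(g)
    lamb = rng.choice(nodes)
    lions = list(rng.choice([v for v in nodes if v != lamb], size=n_lions))
    for t in range(1, t_max):
        for i, v in enumerate(lions):
            nbrs = list(g.neighbors(v))
            w = np.array([g.degree(u) ** alpha for u in nbrs], dtype=float)
            lions[i] = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
        if lamb in lions:
            return t
    return t_max

g = nx.barabasi_albert_graph(500, 3, seed=0)
for k in (1, 3, 5):
    times = [capture_time(g, n_lions=k) for _ in range(50)]
    print(f"{k} lions: mean lifetime ~ {np.mean(times):.0f}")
```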
The composing process in technical communications
NASA Technical Reports Server (NTRS)
Hertz, V. L.
1981-01-01
The theoretical construct under which technical writing exercises operate and results from a survey distributed to a random sample of teachers of technical writing are described. The survey, part of a study to develop materials that did not stress prescriptive formats, drew on diverse elements in report writing to enhance writing as a process. Areas of agreement and disagreement related to problem solving, paper evaluation, and individualizing instruction were surveyed. Areas of concern in contemplating the composition process include: (1) the need to create an environment that helps students want to succeed, (2) the role of peer group activity in helping some students who might not respond through lecture or individual study, and (3) encouraging growth in abilities and helping motivate students' interest in writing projects through relevant assignments or simulations students perceive as relevant.
Hadash, Yuval; Plonsker, Reut; Vago, David R; Bernstein, Amit
2016-07-01
We propose that Experiential Self-Referential Processing (ESRP), the cognitive association of present moment subjective experience (e.g., sensations, emotions, thoughts) with the self, underlies various forms of maladaptation. We theorize that mindfulness contributes to mental health by engendering Experiential Selfless Processing (ESLP), processing present moment subjective experience without self-referentiality. To help advance understanding of these processes we aimed to develop an implicit, behavioral measure of ESRP and ESLP of fear, to experimentally validate this measure, and to test the relations between ESRP and ESLP of fear, mindfulness, and key psychobehavioral processes underlying (mal)adaptation. One hundred thirty-eight adults were randomized to 1 of 3 conditions: control, meta-awareness with identification, or meta-awareness with disidentification. We then measured ESRP and ESLP of fear by experimentally eliciting a subjective experience of fear, while concurrently measuring each participant's cognitive association between her/himself and fear by means of a Single Category Implicit Association Test; we refer to this measurement as the Single Experience & Self Implicit Association Test (SES-IAT). We found preliminary experimental and correlational evidence suggesting the fear SES-IAT measures ESLP of fear and two forms of ESRP: identification with fear and negative self-referential evaluation of fear. Furthermore, we found evidence that ESRP and ESLP are associated with meta-awareness (a core process of mindfulness), as well as key psychobehavioral processes underlying (mal)adaptation. These findings indicate that the cognitive association of self with experience (i.e., ESRP) may be an important substrate of the sense of self, and an important determinant of mental health. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Vila-Castelar, Clara; Ly, Jenny J; Kaplan, Lillian; Van Dyk, Kathleen; Berger, Jeffrey T; Macina, Lucy O; Stewart, Jennifer L; Foldi, Nancy S
2018-04-09
Donepezil is widely used to treat Alzheimer's disease (AD), but detecting early response remains challenging for clinicians. Acetylcholine is known to directly modulate attention, particularly under high cognitive conditions, but no studies to date test whether measures of attention under high load can detect early effects of donepezil. We hypothesized that load-dependent attention tasks are sensitive to short-term treatment effects of donepezil, while global and other domain-specific cognitive measures are not. This longitudinal, randomized, double-blind, placebo-controlled pilot trial (ClinicalTrials.gov Identifier: NCT03073876) evaluated 23 participants newly diagnosed with AD initiating de novo donepezil treatment (5 mg). After baseline assessment, participants were randomized into Drug (n = 12) or Placebo (n = 11) groups, and retested after approximately 6 weeks. Cognitive assessment included: (a) attention tasks (Foreperiod Effect, Attentional Blink, and Covert Orienting tasks) measuring processing speed, top-down accuracy, orienting, intra-individual variability, and fatigue; (b) global measures (Alzheimer's Disease Assessment Scale-Cognitive Subscale, Mini-Mental Status Examination, Dementia Rating Scale); and (c) domain-specific measures (memory, language, visuospatial, and executive function). The Drug but not the Placebo group showed benefits of treatment on high-load measures by preserving top-down accuracy, improving intra-individual variability, and averting fatigue. In contrast, other global or cognitive domain-specific measures could not detect treatment effects over the same treatment interval. This pilot study suggests that attention measures targeting accuracy, variability, and fatigue under high-load conditions could be sensitive to short-term cholinergic treatment. Given the central role of acetylcholine in attentional function, load-dependent attentional measures may be valuable cognitive markers of early treatment response.
Disorder in the early universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Daniel, E-mail: drgreen@cita.utoronto.ca
2015-03-01
Little is known about the microscopic physics that gave rise to inflation in our universe. There are many reasons to wonder if the underlying description requires a careful arrangement of ingredients or if inflation was the result of an essentially random process. At a technical level, randomness in the microphysics of inflation is closely related to disorder in solids. We develop the formalism of disorder for inflation and investigate the observational consequences of quenched disorder. We find that a common prediction is the presence of additional noise in the power spectrum or bispectrum. At a phenomenological level, these results can be recast in terms of a modulating field, allowing us to write the quadratic maximum likelihood estimator for this noise. Preliminary constraints on disorder can be derived from existing analyses but significant improvements should be possible with a dedicated treatment.
Economic lot sizing in a production system with random demand
NASA Astrophysics Data System (ADS)
Lee, Shine-Der; Yang, Chin-Ming; Lan, Shu-Chuan
2016-04-01
An extended economic production quantity model that copes with random demand is developed in this paper. A unique feature of the proposed study is the consideration of transient shortage during the production stage, which has not been explicitly analysed in the existing literature. The considered costs include the set-up cost for the batch production, the inventory carrying cost during the production and depletion stages in one replenishment cycle, and the shortage cost when demand cannot be satisfied from the shop floor immediately. Based on a renewal reward process, a per-unit-time expected cost model is developed and analysed. Under a mild condition, it can be shown that the approximate cost function is convex. Computational experiments have demonstrated that the average reduction in total cost is significant when the proposed lot sizing policy is compared with policies based on deterministic demand.
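For orientation, the classic deterministic EPQ baseline behind the extended model can be written down directly; the paper's random-demand, transient-shortage cost function is more involved, and the cost parameters below are illustrative assumptions.

```python
import numpy as np

# Classic deterministic EPQ: setup + holding cost per unit time for lot size q
K, h, d, p = 120.0, 2.0, 400.0, 1000.0   # setup cost, holding rate, demand, production rate

def cost_per_unit_time(q):
    return K * d / q + 0.5 * h * q * (1 - d / p)

q = np.linspace(50, 2000, 4000)
c = cost_per_unit_time(q)
q_star = np.sqrt(2 * K * d / (h * (1 - d / p)))      # closed-form optimum
print("numeric optimum :", q[np.argmin(c)])
print("closed-form EPQ :", q_star)
# convexity check on the grid: second differences are nonnegative
print("convex on grid  :", bool(np.all(np.diff(c, 2) >= -1e-9)))
```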
Stahl, Christoph; Barth, Marius; Haider, Hilde
2015-12-01
We investigated potential biases affecting the validity of the process-dissociation (PD) procedure when applied to sequence learning. Participants were or were not exposed to a serial reaction time task (SRTT) with two types of pseudo-random materials. Afterwards, participants worked on a free or cued generation task under inclusion and exclusion instructions. Results showed that pre-experimental response tendencies, non-associative learning of location frequencies, and the usage of cue locations introduced bias to PD estimates. These biases may lead to erroneous conclusions regarding the presence of implicit and explicit knowledge. Potential remedies for these problems are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
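For readers unfamiliar with the PD procedure, the standard estimating equations (Inclusion = C + A(1 - C), Exclusion = A(1 - C)) can be applied in a few lines; the proportions below are invented for illustration, and the biases documented in the study are precisely violations of the assumptions behind these equations.

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Standard process-dissociation estimates: with Inclusion = C + A*(1-C)
    and Exclusion = A*(1-C), it follows that C = I - E and A = E / (1 - C)."""
    c = p_inclusion - p_exclusion
    a = p_exclusion / (1 - c) if c < 1 else float("nan")
    return c, a

# e.g. a participant generates trained transitions on 70% of inclusion
# trials but still on 30% of exclusion trials (illustrative numbers)
c, a = process_dissociation(0.70, 0.30)
print(f"controlled (explicit) C = {c:.2f}, automatic (implicit) A = {a:.2f}")
```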
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
The variability of software scoring of the CDMAM phantom associated with a limited number of images
NASA Astrophysics Data System (ADS)
Yang, Chang-Ying J.; Van Metter, Richard
2007-03-01
Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings of a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability for the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for the experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test can be underestimated by the commonly used method of random re-sampling.
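The repeated-selection scheme is essentially a subsampling experiment and is easy to emulate. A hedged sketch: the per-image threshold values below are synthetic stand-ins (the real analysis re-ran the CDCOM post-analysis on each eight-image subset rather than averaging per-image estimates), but the resampling logic is the same.

```python
import numpy as np

rng = np.random.default_rng(9)

# Suppose tvt_per_image[i] is a per-image threshold estimate for one disk
# diameter derived from scoring image i (synthetic values for illustration)
tvt_per_image = rng.normal(loc=1.2, scale=0.15, size=36)

# Variability of the 8-image protocol: repeatedly draw 8 of the 36 images
# without replacement, mimicking the 3000-trial repeated-selection study
trials = np.array([
    rng.choice(tvt_per_image, size=8, replace=False).mean()
    for _ in range(3000)
])
print(f"mean TVT {trials.mean():.3f}, spread (SD) {trials.std():.3f}")
print(f"95% range: {np.percentile(trials, 2.5):.3f}-{np.percentile(trials, 97.5):.3f}")
```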
Nasejje, Justine B; Mwambi, Henry
2017-09-07
Uganda, just like any other Sub-Saharan African country, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded from the analysis to avoid mis-specifying the model; otherwise, using covariates that clearly violate the assumption would yield invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. Thus the first part of the analysis is based on the use of the classical Cox PH model and the second part of the analysis is based on the use of random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child and the number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the age of five in a household, the number of births in the past 5 years, the wealth index, the total number of children ever born and the child's birth order. The results further indicated that the predictive performance of random survival forests built using covariates including those that violate the PH assumption was higher than that of random survival forests built using only covariates that satisfy the PH assumption. Random survival forests are appealing methods for analysing public health data to understand factors strongly associated with under-five child mortality rates, especially in the presence of covariates that violate the proportional hazards assumption.
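A minimal sketch of the random-survival-forest step, assuming the scikit-survival package and synthetic covariates loosely named after those in the study; it is not the authors' DHS analysis, and the hazard model generating the data is invented.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(10)

# Synthetic stand-ins for DHS covariates (the real analysis used survey data)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),        # sex of child
    rng.poisson(0.3, n),          # births in past 1 year
    rng.integers(0, 5, n),        # children under five in household
    rng.integers(1, 6, n),        # wealth quintile
]).astype(float)

# time-to-event in months with an invented covariate-dependent hazard
time = rng.exponential(60.0 / (1.0 + 0.5 * X[:, 1]), n).clip(0.5, 59.0)
event = rng.random(n) < 0.2       # ~20% observed deaths, rest censored
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15,
                           random_state=0)
rsf.fit(X, y)
print("concordance index on training data:", round(rsf.score(X, y), 3))
```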
Coevolutionary dynamics in large, but finite populations
NASA Astrophysics Data System (ADS)
Traulsen, Arne; Claussen, Jens Christian; Hauert, Christoph
2006-07-01
Coevolving and competing species or game-theoretic strategies exhibit rich and complex dynamics for which a general theoretical framework based on finite populations is still lacking. Recently, an explicit mean-field description in the form of a Fokker-Planck equation was derived for frequency-dependent selection with two strategies in finite populations based on microscopic processes [A. Traulsen, J. C. Claussen, and C. Hauert, Phys. Rev. Lett. 95, 238701 (2005)]. Here we generalize this approach in a twofold way: First, we extend the framework to an arbitrary number of strategies and second, we allow for mutations in the evolutionary process. The deterministic limit of infinite population size of the frequency-dependent Moran process yields the adjusted replicator-mutator equation, which describes the combined effect of selection and mutation. For finite populations, we provide an extension taking random drift into account. In the limit of neutral selection, i.e., whenever the process is determined by random drift and mutations, the stationary strategy distribution is derived. This distribution forms the background for the coevolutionary process. In particular, a critical mutation rate $u_c$ is obtained separating two scenarios: above $u_c$ the population predominantly consists of a mixture of strategies whereas below $u_c$ the population tends to be in homogeneous states. For one of the fundamental problems in evolutionary biology, the evolution of cooperation under Darwinian selection, we demonstrate that the analytical framework provides excellent approximations to individual based simulations even for rather small population sizes. This approach complements simulation results and provides a deeper, systematic understanding of coevolutionary dynamics.
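The microscopic process that the Fokker-Planck description coarse-grains can be simulated directly. A minimal sketch for two strategies, assuming a Prisoner's Dilemma payoff matrix, selection intensity w, and mutation rate u as illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(11)

def moran_with_mutation(n=100, w=0.1, u=0.01, steps=200_000,
                        payoff=np.array([[3.0, 0.0], [5.0, 1.0]])):
    """Frequency-dependent Moran process for two strategies (A, B) with
    mutation; the default payoff is a Prisoner's Dilemma with A = cooperate.
    Returns the time-averaged fraction of A players."""
    i = n // 2                                   # initial number of A players
    visits = np.zeros(steps)
    for t in range(steps):
        if 0 < i < n:                            # payoffs exclude self-play
            pi_a = (payoff[0, 0] * (i - 1) + payoff[0, 1] * (n - i)) / (n - 1)
            pi_b = (payoff[1, 0] * i + payoff[1, 1] * (n - i - 1)) / (n - 1)
            f_a, f_b = 1 - w + w * pi_a, 1 - w + w * pi_b
            p_a = i * f_a / (i * f_a + (n - i) * f_b)
        else:
            p_a = i / n
        # birth: choose a parent proportional to fitness, then mutate
        child_a = rng.random() < p_a
        if rng.random() < u:
            child_a = not child_a
        # death: the child replaces a uniformly chosen individual
        dies_a = rng.random() < i / n
        i += int(child_a) - int(dies_a)
        visits[t] = i / n
    return visits.mean()

print("average cooperator fraction:", round(moran_with_mutation(), 3))
```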
Liu, Xiang; Chen, Fei; Lyu, Shengman; Sun, Dexin; Zhou, Shurong
2018-02-01
With increasing attention being paid to the consequences of global biodiversity losses, several recent studies have demonstrated that realistic species losses can have larger impacts than random species losses on community productivity and resilience. However, little is known about the effects of the order in which species are lost on biodiversity-disease relationships. Using a multiyear nitrogen addition and artificial warming experiment in natural assemblages of alpine meadow vegetation on the Qinghai-Tibetan Plateau, we inferred the sequence of plant species losses under fertilization/warming. Then the sequence of species losses under fertilization/warming was used to simulate the species loss orders (both realistic and random) in an adjacent, novel removal experiment manipulating plot-level plant diversity. We explicitly compared the effect sizes of random versus realistic species losses simulated from fertilization/warming on plant foliar fungal diseases. We found that realistic species losses simulated from fertilization had greater effects than random losses on fungal diseases, and that species identity drove the diversity-disease relationship. Moreover, the plant species most prone to foliar fungal diseases were also the least vulnerable to extinction under fertilization, demonstrating the importance of protecting low-competence species (those whose ability to maintain and transmit fungal infections is low) to impede the spread of infectious disease. In contrast, there was no difference between random and realistic species loss scenarios simulated from experimental warming (or the combination of warming and fertilization) on the diversity-disease relationship, indicating that the functional consequences of species losses may vary under different drivers.
NASA Astrophysics Data System (ADS)
Matía, Isabel; van Loon, Jack W. A.; Carnero-Díaz, Eugénie; Marco, Roberto; Medina, Francisco Javier
2009-01-01
The study of the modifications induced by altered gravity in the functions of plant cells is a valuable tool for the objective of the survival of terrestrial organisms in conditions different from those of the Earth. We have used the system "cell proliferation-ribosome biogenesis", two inter-related essential cellular processes, with the purpose of studying these modifications. Arabidopsis seedlings belonging to a transformed line containing the reporter gene GUS under the control of the promoter of the cyclin gene CYCB1, a cell cycle regulator, were grown in a Random Positioning Machine, a device known to accurately simulate microgravity. Samples were taken at 2, 4 and 8 days after germination and subjected to biometrical analysis and to cellular morphometrical, ultrastructural and immunocytochemical studies in order to determine the rates of cell proliferation and ribosome biogenesis, together with an estimate of the expression of the cyclin gene as an indication of the state of cell cycle regulation. Our results show that cells divide more in simulated microgravity in a Random Positioning Machine than under control gravity, but the cell cycle appears significantly altered as early as 2 days after germination. Furthermore, higher proliferation is not accompanied by an increase in ribosome synthesis, as is the rule on Earth; instead, the functional markers of this process appear depleted in samples grown in simulated microgravity. Therefore, the alteration of the gravitational environmental conditions results in considerable stress for plant cells, including those not specialized in gravity perception.
NASA Astrophysics Data System (ADS)
Hu, Zhaosheng; Ma, Tingli; Hayase, Shuzi
2018-01-01
Thin perovskite solar cells are attracting intense interest since they reduce the amount of absorber material, especially toxic lead in methylammonium lead iodide (MAPbI3) devices, and have wide application in semitransparent and tandem solar cells. However, owing to the decreased layer thickness, thin perovskite devices harvest light weakly and perform poorly. Moreover, plasmonic thin perovskite devices incorporating non-coupling metal nanoparticles cannot match the performance of normal devices. In this perspective, we discuss the implications of employing random silver-gold heterodimers in MAPbI3 solar cells with the aim of establishing guidelines for efficient ultrathin perovskite solar cells. This method induces extraordinarily high light harvesting in ultrathin perovskite films, and the underlying physical mechanism behind the enhanced absorption is investigated in depth by plasmon hybridization, a dipole-dipole coupling method and FDTD simulation. We note that a perovskite-embedded silver-gold heterodimer overcomes the vanishing antibonding plasmon resonance (σ*) in the non-junction area of gold/silver homodimers. A 150-nm perovskite film with embedded random silver-gold heterodimers of 80 nm size and 25 nm gap distance shows a 28.15% absorption enhancement compared to the reference film, which is higher than the reported 10% for gold homodimers. We also outline a realistic solution-processed, easy and low-cost fabrication method, which provides a means to realize highly efficient ultrathin perovskite solar cells and other absorber-based photovoltaics.
NASA Astrophysics Data System (ADS)
Seo, Jongmin; Mani, Ali
2018-04-01
Superhydrophobic surfaces demonstrate promising potential for skin friction reduction in naval and hydrodynamic applications. Recent developments of superhydrophobic surfaces aiming at scalable applications use random distributions of roughness, produced for example by spray coating or etching processes. However, most previous analyses of the interaction between flows and superhydrophobic surfaces studied periodic geometries that are economically feasible only in laboratory-scale experiments. In order to assess the drag reduction effectiveness as well as the interfacial robustness of superhydrophobic surfaces with randomly distributed textures, we conduct direct numerical simulations of turbulent flows over randomly patterned interfaces considering a range of texture widths w+ ≈ 4-26 and solid fractions ϕs = 11%-25%. Slip and no-slip boundary conditions are implemented in a pattern, modeling the presence of gas-liquid interfaces and solid elements. Our results indicate that the slip of randomly distributed textures under turbulent flows is about 30% less than that of surfaces with aligned features of the same size. In the small-texture-size limit w+ ≈ 4, the slip length of the randomly distributed textures in turbulent flows is well described by a previously introduced Stokes flow solution for randomly distributed shear-free holes. By comparing DNS results for patterned slip and no-slip boundaries against the corresponding homogenized slip-length boundary conditions, we show that turbulent flows over randomly distributed posts can be represented by an isotropic slip length in the streamwise and spanwise directions. The average pressure fluctuation on a gas pocket is similar to that of aligned features with the same texture size and gas fraction, but the maximum interface deformation at the leading edge of the roughness element is about twice as large when the textures are randomly distributed. The presented analyses provide insights into the implications of texture randomness for the drag reduction performance and robustness of superhydrophobic surfaces.
Chai, Yongfu; Yue, Ming; Liu, Xiao; Guo, Yaoxin; Wang, Mao; Xu, Jinshi; Zhang, Chenguang; Chen, Yu; Zhang, Lixia; Zhang, Ruichang
2016-01-01
Quantifying the drivers underlying the distribution of biodiversity during succession is a critical issue in ecology and conservation, and can also provide insights into the mechanisms of community assembly. Ninety plots were established in the Loess Plateau region of northern Shaanxi in China. The taxonomic and phylogenetic (alpha and beta) diversity were quantified within six succession stages. Null models were used to test whether the observed phylogenetic distances differed from random expectations. Taxonomic beta diversity did not show a regular pattern, while phylogenetic beta diversity decreased throughout succession. The shrub stage occurred as a transition from phylogenetic overdispersion to clustering for both NRI (Net Relatedness Index) and betaNRI. The betaNTI (beta Nearest Taxon Index) values for early stages were on average phylogenetically random, but in the betaNRI analyses these stages were phylogenetically overdispersed. Assembly of woody plants differed from that of herbaceous plants during late community succession. We suggest that deterministic and stochastic processes respectively play a role in different aspects of community phylogenetic structure in the early succession stage, and that community composition in the late succession stage is governed by a deterministic process. In conclusion, long-lasting evolutionary imprints are reflected in the present-day composition of communities arrayed along the succession gradient. PMID:27272407
Chai, Yongfu; Yue, Ming; Liu, Xiao; Guo, Yaoxin; Wang, Mao; Xu, Jinshi; Zhang, Chenguang; Chen, Yu; Zhang, Lixia; Zhang, Ruichang
2016-06-08
Quantifying the drivers underlying the distribution of biodiversity during succession is a critical issue in ecology and conservation, and can also provide insights into the mechanisms of community assembly. Ninety plots were established in the Loess Plateau region of northern Shaanxi in China. Taxonomic and phylogenetic (alpha and beta) diversity were quantified within six succession stages. Null models were used to test whether the observed phylogenetic distances differed from random expectations. Taxonomic beta diversity did not show a regular pattern, while phylogenetic beta diversity decreased throughout succession. The shrub stage marked a transition from phylogenetic overdispersion to clustering for both NRI (Net Relatedness Index) and betaNRI. The betaNTI (Nearest Taxon Index) values for early stages were on average phylogenetically random, but in the betaNRI analyses these stages were phylogenetically overdispersed. Assembly of woody plants differed from that of herbaceous plants during late community succession. We suggest that deterministic and stochastic processes each play a role in different aspects of community phylogenetic structure in the early succession stages, and that the community composition of the late succession stage is governed by a deterministic process. In conclusion, long-lasting evolutionary imprints remain on the present-day composition of communities arrayed along the succession gradient.
NASA Astrophysics Data System (ADS)
Wang, Lianfeng; Yan, Biao; Guo, Lijie; Gu, Dongdong
2018-04-01
A new transient mesoscopic model with a randomly packed powder bed is proposed to investigate the heat and mass transfer and laser processing quality between neighboring tracks during selective laser melting (SLM) of AlSi12 alloy, using the finite volume method (FVM) and considering the solid/liquid phase transition, temperature-dependent material properties, and interfacial forces. The results revealed that both the operating temperature and the resultant cooling rate were elevated by increasing the laser power. Accordingly, the viscosity of the liquid was significantly reduced at high laser power and the melt was characterized by a large velocity, which tended to produce a more intensive convection within the melt pool. In this case, sufficient heat and mass transfer occurred at the interface between the previously fabricated tracks and the track currently being built, producing strong spreading between the neighboring tracks and a resultant high-quality surface without obvious porosity. By contrast, the surface quality of SLM-processed components at relatively low laser power was notably degraded due to the limited heat and mass transfer at the interface of neighboring tracks. Furthermore, the experimental surface morphologies of the top surface were acquired and were in full accordance with the results calculated via simulation.
Slosky, Laura E.; Burke, Natasha L.; Siminoff, Laura A.
2014-01-01
Background. In stressful situations, decision making processes related to informed consent may be compromised. Given the profound levels of distress that surrogates of children in pediatric intensive care units (PICU) experience, it is important to understand what factors may be influencing the decision making process beyond the informed consent. The purpose of this study was to evaluate the role of clinician influence and other factors on decision making regarding participation in a randomized clinical trial (RCT). Method. Participants were 76 children under sedation in a PICU and their surrogate decision makers. Measures included the Post Decision Clinician Survey, observer checklist, and post-decision interview. Results. Age of the pediatric patient was related to participation decisions in the RCT such that older children were more likely to be enrolled. Mentioning the sponsoring institution was associated with declining to participate in the RCT. Type of health care provider and overt recommendations to participate were not related to enrollment. Conclusion. Decisions to participate in research by surrogates of children in the PICU appear to relate to child demographics and subtleties in communication; however, no modifiable characteristics were related to increased participation, indicating that the informed consent process may not be compromised in this population. PMID:25161672
Uncertainty in Random Forests: What does it mean in a spatial context?
NASA Astrophysics Data System (ADS)
Klump, Jens; Fouedjio, Francky
2017-04-01
Geochemical surveys are an important part of exploration for mineral resources and of environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well-established geostatistical method for the prediction of spatial data, but it requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well-understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different from the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. In conclusion, our preliminary results show that the model-driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation. However, in cases of weak spatial correlation Random Forest, as a non-parametric method, may give the better results once we have a better understanding of the meaning of its uncertainty measures in a spatial context. References [1] Kirkwood, C., M. Cave, D. Beamish, S. Grebby, and A. Ferreira (2016), A machine learning approach to geochemical mapping, Journal of Geochemical Exploration, 163, 28-40, doi:10.1016/j.gexplo.2016.05.003.
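To make the contrast concrete, the snippet below is a minimal Python sketch (not the study's actual pipeline) of how a Random Forest "uncertainty" map is typically derived from the spread of per-tree predictions; the coordinates, the synthetic concentration values, and all parameter choices are hypothetical stand-ins for a real geochemical dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))                # hypothetical easting/northing
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)     # hypothetical concentration

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Predict on a regular grid and use the spread across trees as "uncertainty".
g = np.linspace(0, 10, 50)
grid = np.c_[np.repeat(g, 50), np.tile(g, 50)]
per_tree = np.stack([t.predict(grid) for t in rf.estimators_])  # (trees, points)
mean_map = per_tree.mean(axis=0)
spread_map = per_tree.std(axis=0)   # reflects tree disagreement, not distance
                                    # to the data as a kriging variance would
```

Unlike a kriging variance, which grows with distance from the sampled locations, this tree-disagreement spread carries no explicit spatial context, which is one plausible reading of why the two uncertainty measures behave so differently.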
Atomic clocks and the continuous-time random-walk
NASA Astrophysics Data System (ADS)
Formichella, Valerio; Camparo, James; Tavella, Patrizia
2017-11-01
Atomic clocks play a fundamental role in many fields; most notably, they generate Coordinated Universal Time and are at the heart of all global navigation satellite systems. Notwithstanding their excellent timekeeping performance, their output frequency does vary: it can display deterministic frequency drift; diverse continuous noise processes result in nonstationary clock noise (e.g., random-walk frequency noise, modelled as a Wiener process); and the clock frequency may display sudden changes (i.e., "jumps"). Typically, the clock's frequency instability is evaluated by the Allan or Hadamard variances, whose functional forms can identify the different operative noise processes. Here, we show that the Allan and Hadamard variances of a particular continuous-time random walk, the compound Poisson process, have the same functional form as for a Wiener process with drift. The compound Poisson process, introduced as a model for observed frequency jumps, is an alternative to the Wiener process for modelling random-walk frequency noise. This alternative model fits the behavior of the rubidium clocks flying on GPS Block-IIR satellites well. Further, starting from jump statistics, the model can be improved by considering a more general form of continuous-time random walk, and this could bring new insights into the physics of atomic clocks.
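As a rough illustration of the claim about functional forms, the following sketch simulates a pure-jump (compound Poisson) fractional-frequency record and computes a non-overlapping Allan variance; the jump rate and jump size are arbitrary assumed values, not parameters of the GPS clocks discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1.0, 200_000
# Compound-Poisson frequency record: at Poisson-distributed epochs the
# frequency jumps by a random amount and then holds (pure jump process).
jump_rate, jump_sigma = 1e-3, 1e-12              # assumed rate and jump size
jumps = rng.normal(0, jump_sigma, n) * (rng.random(n) < jump_rate * dt)
y = np.cumsum(jumps)                              # frequency = sum of past jumps

def allan_variance(y, m):
    """Non-overlapping Allan variance at averaging time m*dt."""
    ybar = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

for m in (1, 10, 100, 1000):
    print(m * dt, allan_variance(y, m))
```

For random-walk-type frequency noise the Allan variance grows linearly with the averaging time, so the printed values should scale roughly as tau, matching the Wiener-process signature described in the abstract.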
Random covering of the circle: the configuration-space of the free deposition process
NASA Astrophysics Data System (ADS)
Huillet, Thierry
2003-12-01
Consider a circle of circumference 1. Throw n points at random, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than the rod length s (the packing gas), those for which the hard-rod and packing constraints are both fulfilled (parking configurations), and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain where ns = ρ, for some finite density ρ of points. Using results on spacings in the random division of the circle, explicit large-deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
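A minimal Monte Carlo sketch of the model (not of the large-deviation analysis) follows: it drops n arcs of length s on the unit circle and classifies the resulting configuration through the spacings between consecutive drop points; the parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

def classify(n, s, rng):
    """Classify one random configuration of n clockwise arcs of length s on
    the unit circle via the spacings between consecutive drop points."""
    x = np.sort(rng.random(n))
    gaps = np.diff(np.r_[x, x[0] + 1.0])   # n spacings, summing to 1
    return {
        "hard_rod": bool(np.all(gaps >= s)),                    # no rods overlap
        "covering": bool(np.all(gaps <= s)),                    # arcs cover the circle
        "parking": bool(np.all((gaps >= s) & (gaps < 2 * s))),  # hard rod, no room left
    }

n, s, trials = 20, 1.0 / 20, 100_000       # finite density rho = n*s = 1
counts = {"hard_rod": 0, "covering": 0, "parking": 0}
for _ in range(trials):
    for key, hit in classify(n, s, rng).items():
        counts[key] += hit
print({key: c / trials for key, c in counts.items()})  # rare-event frequencies
```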
Foldamer hypothesis for the growth and sequence differentiation of prebiotic polymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guseva, Elizaveta; Zuckermann, Ronald N.; Dill, Ken A.
It is not known how life originated. It is thought that prebiotic processes were able to synthesize short random polymers. However, then, how do short-chain molecules spontaneously grow longer? Also, how would random chains grow more informational and become autocatalytic (i.e., increasing their own concentrations)? We study the folding and binding of random sequences of hydrophobic (H) and polar (P) monomers in a computational model. We find that even short hydrophobic polar (HP) chains can collapse into relatively compact structures, exposing hydrophobic surfaces. In this way, they act as primitive versions of today’s protein catalysts, elongating other such HP polymers as ribosomes would now do. Such foldamer catalysts are shown to form an autocatalytic set, through which short chains grow into longer chains that have particular sequences. An attractive feature of this model is that it does not overconverge to a single solution; it gives ensembles that could further evolve under selection. This mechanism describes how specific sequences and conformations could contribute to the chemistry-to-biology (CTB) transition.
Long-term strength and damage accumulation in laminates
NASA Astrophysics Data System (ADS)
Dzenis, Yuris A.; Joshi, Shiv P.
1993-04-01
A modified version of the probabilistic model developed by the authors for damage evolution analysis of laminates subjected to random loading is utilized to predict the long-term strength of laminates. The model assumes that each ply in a laminate consists of a large number of mesovolumes. Probabilistic variation functions for mesovolume stiffnesses as well as strengths are used in the analysis. Stochastic strains are calculated using lamination theory and the theory of random functions. Deterioration of ply stiffnesses is calculated on the basis of the probabilities of mesovolume failures using the theory of excursions of random processes beyond given limits. Long-term strength and damage accumulation in a Kevlar/epoxy laminate under tension and complex in-plane loading are investigated. Effects of the mean level and stochastic deviation of loading on damage evolution and the time-to-failure of the laminate are discussed. Long-term cumulative damage at the time of final failure is greater at low loading levels than at high loading levels. The effect of the deviation in loading is more pronounced at lower mean loading levels.
Foldamer hypothesis for the growth and sequence differentiation of prebiotic polymers
Guseva, Elizaveta; Zuckermann, Ronald N.; Dill, Ken A.
2017-08-22
It is not known how life originated. It is thought that prebiotic processes were able to synthesize short random polymers. However, then, how do short-chain molecules spontaneously grow longer? Also, how would random chains grow more informational and become autocatalytic (i.e., increasing their own concentrations)? We study the folding and binding of random sequences of hydrophobic (H) and polar (P) monomers in a computational model. We find that even short hydrophobic polar (HP) chains can collapse into relatively compact structures, exposing hydrophobic surfaces. In this way, they act as primitive versions of today’s protein catalysts, elongating other such HP polymers as ribosomes would now do. Such foldamer catalysts are shown to form an autocatalytic set, through which short chains grow into longer chains that have particular sequences. An attractive feature of this model is that it does not overconverge to a single solution; it gives ensembles that could further evolve under selection. This mechanism describes how specific sequences and conformations could contribute to the chemistry-to-biology (CTB) transition.
Foldamer hypothesis for the growth and sequence differentiation of prebiotic polymers
Guseva, Elizaveta; Zuckermann, Ronald N.; Dill, Ken A.
2017-01-01
It is not known how life originated. It is thought that prebiotic processes were able to synthesize short random polymers. However, then, how do short-chain molecules spontaneously grow longer? Also, how would random chains grow more informational and become autocatalytic (i.e., increasing their own concentrations)? We study the folding and binding of random sequences of hydrophobic (H) and polar (P) monomers in a computational model. We find that even short hydrophobic polar (HP) chains can collapse into relatively compact structures, exposing hydrophobic surfaces. In this way, they act as primitive versions of today’s protein catalysts, elongating other such HP polymers as ribosomes would now do. Such foldamer catalysts are shown to form an autocatalytic set, through which short chains grow into longer chains that have particular sequences. An attractive feature of this model is that it does not overconverge to a single solution; it gives ensembles that could further evolve under selection. This mechanism describes how specific sequences and conformations could contribute to the chemistry-to-biology (CTB) transition. PMID:28831002
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au
We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.
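For the ordinary-diffusion special case mentioned at the end of the abstract, the idea can be illustrated in a few lines: iterate the master equation of an unbiased discrete-time random walk and check that its variance grows as predicted by the diffusion limit. This is only a toy sketch of the general construction, with arbitrary grid and step counts.

```python
import numpy as np

# Master-equation iteration for an unbiased discrete-time random walk:
# p_{n+1}(i) = (p_n(i-1) + p_n(i+1)) / 2, here with periodic boundaries.
L, steps = 401, 2000
p = np.zeros(L)
p[L // 2] = 1.0                                  # delta initial condition
for _ in range(steps):
    p = 0.5 * (np.roll(p, 1) + np.roll(p, -1))

# Diffusion limit check: with unit lattice spacing and time step, the
# variance should grow like 2*D*n with D = 1/2, i.e. equal the step count.
i = np.arange(L) - L // 2
print(np.sum(p * i**2), "vs", steps)
```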
Group Matching: Is This a Research Technique to Be Avoided?
ERIC Educational Resources Information Center
Ross, Donald C.; Klein, Donald F.
1988-01-01
The variance of the sample difference and the power of the "F" test for mean differences were studied under group matching on covariates and also under random assignment. Results shed light on systematic assignment procedures advocated to provide more precise estimates of treatment effects than simple random assignment. (TJH)
NASA Astrophysics Data System (ADS)
Adamczyk, Krzysztof; Søndenâ, Rune; Stokkan, Gaute; Looney, Erin; Jensen, Mallory; Lai, Barry; Rinio, Markus; Di Sabatino, Marisa
2018-02-01
In this work, we applied internal quantum efficiency mapping to study the recombination activity of grain boundaries in High Performance Multicrystalline Silicon under different processing conditions. Wafers were divided into groups and underwent different thermal processing, consisting of phosphorus diffusion gettering and surface passivation with hydrogen-rich layers. After these thermal treatments, wafers were processed into heterojunction with intrinsic thin layer solar cells. Light Beam Induced Current and Electron Backscatter Diffraction were applied to analyse the influence of thermal treatment during standard solar cell processing on different types of grain boundaries. The results show that after cell processing, most random-angle grain boundaries in the material are well passivated, whereas small-angle grain boundaries are not. Special cases of coincidence site lattice grain boundaries with high recombination activity are also found. Based on micro-X-ray fluorescence measurements, a change in the contamination level is suggested as the reason behind their increased activity.
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
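The essence of ordinal comparison, that order is far more robust to evaluation noise than value, can be demonstrated with a short simulation; the performance model and noise level below are hypothetical, not taken from the turbine-blade study.

```python
import numpy as np

rng = np.random.default_rng(3)

n_designs, noise, g, trials = 1000, 1.0, 50, 2000
true_perf = rng.normal(size=n_designs)        # hypothetical true performance
true_top = set(np.argsort(true_perf)[-g:])

hits = 0
for _ in range(trials):
    noisy = true_perf + noise * rng.normal(size=n_designs)  # one cheap evaluation
    observed_top = np.argsort(noisy)[-g:]
    hits += any(i in true_top for i in observed_top)
print(hits / trials)   # alignment probability: observed top-g hits true top-g
```

Even with noise comparable to the spread of true performance, the softened goal of landing *some* good design in the observed top set is met with probability close to one, which is the leverage Ordinal Optimization exploits.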
EDITORIAL: Special section on foliage penetration
NASA Astrophysics Data System (ADS)
Fiddy, M. A.; Lang, R.; McGahan, R. V.
2004-04-01
Waves in Random Media was founded in 1991 to provide a forum for papers dealing with electromagnetic and acoustic waves as they propagate and scatter through media or objects having some degree of randomness. This is a broad charter since, in practice, all scattering obstacles and structures have roughness or randomness, often on the scale of the wavelength being used to probe them. Including this random component leads to some quite different methods for describing propagation effects, for example, when propagating through the atmosphere or the ground. This special section on foliage penetration (FOPEN) focuses on the problems arising from microwave propagation through foliage and vegetation. Applications of such studies include the estimation of forest biomass and of the moisture of the underlying soil, as well as the detection of objects hidden therein. In addition to the so-called `direct problem' of trying to describe energy propagating through such media, the complementary inverse problem is of great interest and much harder to solve. The development of theoretical models and associated numerical algorithms for identifying objects concealed by foliage has applications in surveillance, ranging from monitoring drug trafficking to targeting military vehicles. FOPEN can be employed to map the earth's surface in cases when it is under a forest canopy, permitting the identification of objects or targets on that surface, but the process for doing so is not straightforward. There has been an increasing interest in foliage penetration synthetic aperture radar (FOPEN or FOPENSAR) over the last 10 years, and this special section provides a broad overview of many of the issues involved. The detection, identification, and geographical location of targets under foliage or otherwise obscured by poor visibility conditions remains a challenge. In particular, a trade-off often needs to be appreciated: diminishing the deleterious effects of multiple scattering from leaves is typically associated with a significant loss in target resolution. Foliage is more or less transparent to some radar frequencies, but the longer wavelengths found in the VHF (30 to 300 MHz) and UHF (300 MHz to 3 GHz) portions of the spectrum have more chance of penetrating foliage than do wavelengths at the X band (8 to 12 GHz). Reflection and multiple scattering occur at some other frequencies, and models of the processes involved are crucial. Two topical reviews can be found in this issue, one on the microwave radiometry of forests (page S275) and another describing ionospheric effects on space-based radar (page S189). Subsequent papers present new results on modelling coherent backscatter from forests (page S299), modelling forests as discrete random media over a random interface (page S359), and interpreting ranging scatterometer data from forests (page S317). Cloude et al present research on identifying targets beneath foliage using polarimetric SAR interferometry (page S393), while Treuhaft and Siqueira use interferometric radar to describe forest structure and biomass (page S345). Vechhia et al model scattering from leaves (page S333), and Semichaevsky et al address the problem of the trade-off between increasing wavelength, reduction in multiple scattering, and target resolution (page S415).
NASA Technical Reports Server (NTRS)
Richards, Hadley T.
1954-01-01
A turbine blade with a porous stainless-steel shell sintered to a supporting steel strut has been fabricated for tests at the NACA by the Federal-Mogul Corporation under contract from the Bureau of Aeronautics, Department of the Navy. The apparent permeability of this blade, on the average, more nearly approaches the values specified by the NACA than did two strut-supported bronze blades in a previous investigation. Random variations of permeability in the present blade are substantially greater than those of the bronze blades, but projected improvements in certain phases of the fabrication process are expected to reduce these variations.
An Integrated Approach to Parameter Learning in Infinite-Dimensional Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, Zachary M.; Wendelberger, Joanne Roth
The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive in terms of both time and money. Existing solutions include batch-parallel simulations, high-dimensional, derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as view the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and arrive at the desired parameter set more quickly.
Applying Active Learning to Assertion Classification of Concepts in Clinical Text
Chen, Yukun; Mani, Subramani; Xu, Hua
2012-01-01
Supervised machine learning methods for clinical natural language processing (NLP) research require a large number of annotated samples, which are very expensive to build because of the involvement of physicians. Active learning, an approach that actively samples from a large pool, provides an alternative solution. Its major goal in classification is to reduce the annotation effort while maintaining the quality of the predictive model. However, few studies have investigated its uses in clinical NLP. This paper reports an application of active learning to a clinical text classification task: to determine the assertion status of clinical concepts. The annotated corpus for the assertion classification task in the 2010 i2b2/VA Clinical NLP Challenge was used in this study. We implemented several existing and newly developed active learning algorithms and assessed their uses. The outcome is reported as the global ALC score, based on the Area under the average Learning Curve of the AUC (Area Under the Curve) score. Results showed that when the same number of annotated samples was used, active learning strategies could generate better classification models (best ALC = 0.7715) than the passive learning method (random sampling) (ALC = 0.7411). Moreover, to achieve the same classification performance, active learning strategies required fewer samples than the random sampling method. For example, to achieve an AUC of 0.79, the random sampling method used 32 samples, while our best active learning algorithm required only 12 samples, a reduction of 62.5% in manual annotation effort. PMID:22127105
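A generic uncertainty-sampling loop, one common active-learning strategy, is sketched below in Python; it is not one of the specific algorithms evaluated in the paper, and the synthetic dataset and classifier are stand-ins for the i2b2/VA assertion corpus and models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Seed with a few labeled examples from each class, keep the rest as the pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in set(labeled)]

for _ in range(50):                                      # annotation budget
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    pick = pool[int(np.argmin(np.abs(proba - 0.5)))]     # most uncertain sample
    labeled.append(pick)                                 # "annotate" it
    pool.remove(pick)
```

Passive learning corresponds to replacing the argmin line with a uniformly random draw from the pool, which is the baseline the ALC comparison above is made against.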
NASA Astrophysics Data System (ADS)
Tsukanov, A. A.; Gorbatnikov, A. V.
2018-01-01
Study of the statistical parameters of the Earth's random microseismic field makes it possible to obtain estimates of the properties and structure of the Earth's crust and upper mantle. Different approaches are used to observe and process the microseismic records, which are divided into several groups of passive seismology methods. Among them are the well-known methods of surface-wave tomography, the spectral H/V ratio of the components in the surface wave, and microseismic sounding, currently under development, which uses the spectral ratio V/V0 of the vertical components between pairs of spatially separated stations. In the course of previous experiments, it became clear that these ratios are stable statistical parameters of the random field that do not depend on the properties of microseism sources. This paper proposes to expand the mentioned approach and study the possibilities for using the ratio of the horizontal components H1/H2 of the microseismic field. Numerical simulation was used to study the influence of an embedded velocity inhomogeneity on the spectral ratio of the horizontal components of the random field of fundamental Rayleigh modes, based on the concept that the Earth's microseismic field is represented by these waves in a significant part of the frequency spectrum.
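For readers unfamiliar with spectral ratios, the following sketch shows one conventional way such a ratio could be estimated from two horizontal-component records using Welch power spectral densities; the sampling rate and the synthetic noise records are assumptions standing in for real microseism data.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
fs, n = 100.0, 360_000                     # assumed sampling rate and length
h1 = rng.normal(size=n)                    # stand-ins for the H1, H2 records
h2 = rng.normal(size=n)

f, p1 = welch(h1, fs=fs, nperseg=4096)     # power spectral densities
_, p2 = welch(h2, fs=fs, nperseg=4096)
ratio = np.sqrt(p1 / p2)                   # amplitude ratio H1/H2 vs frequency
print(f[:3], ratio[:3])
```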
Hindersin, Laura; Traulsen, Arne
2015-11-01
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
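The Birth-death process described above is straightforward to simulate; the sketch below estimates the fixation probability of a single mutant on a small Erdős-Rényi graph and compares it with the well-mixed Moran value. Graph size, edge probability, and fitness are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(5)

def fixation_probability(adj, r, trials, rng):
    """Monte Carlo fixation probability of one mutant of fitness r under
    Birth-death updating on an undirected graph given by adjacency matrix."""
    n = len(adj)
    neighbors = [np.flatnonzero(adj[i]) for i in range(n)]
    fixed = 0
    for _ in range(trials):
        mutant = np.zeros(n, dtype=bool)
        mutant[rng.integers(n)] = True
        while 0 < mutant.sum() < n:
            fitness = np.where(mutant, r, 1.0)
            parent = rng.choice(n, p=fitness / fitness.sum())  # reproduce
            child = rng.choice(neighbors[parent])              # displace neighbor
            mutant[child] = mutant[parent]
        fixed += mutant.all()
    return fixed / trials

n, p, r = 20, 0.3, 1.1
while True:                                   # redraw until no isolated nodes
    upper = np.triu(rng.random((n, n)) < p, 1)
    adj = upper | upper.T
    if adj.sum(axis=1).min() > 0:
        break
print(fixation_probability(adj, r, 2000, rng))
print((1 - 1 / r) / (1 - 1 / r**n))           # well-mixed Moran reference
```

On most connected random graphs generated this way, Birth-death updating should give a fixation probability at or slightly above the well-mixed reference for r > 1, consistent with the amplification result stated above.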
Attributing intentions to random motion engages the posterior superior temporal sulcus.
Lee, Su Mei; Gao, Tao; McCarthy, Gregory
2014-01-01
The right posterior superior temporal sulcus (pSTS) is a neural region involved in assessing the goals and intentions underlying the motion of social agents. Recent research has identified visual cues, such as chasing, that trigger animacy detection and intention attribution. When readily available in a visual display, these cues reliably activate the pSTS. Here, using functional magnetic resonance imaging, we examined whether attributing intentions to random motion would likewise engage the pSTS. Participants viewed displays of four moving circles and were instructed to search for chasing or mirror-correlated motion. On chasing trials, one circle chased another circle, invoking the percept of an intentional agent, while on correlated motion trials, one circle's motion was mirror-reflected by another. On the remaining trials, all circles moved randomly. As expected, pSTS activation was greater when participants searched for chasing vs correlated motion when these cues were present in the displays. Of critical importance, pSTS activation was also greater when participants searched for chasing compared to mirror-correlated motion when the displays in both search conditions were statistically identical random motion. We conclude that pSTS activity associated with intention attribution can be invoked by top-down processes in the absence of reliable visual cues for intentionality.
Pottage, Claire L; Schaefer, Alexandre
2012-02-01
The emotional enhancement of memory is often thought to be determined by attention. However, recent evidence using divided attention paradigms suggests that attention does not play a significant role in the formation of memories for aversive pictures. We report a study that investigated this question using a paradigm in which participants had to encode lists of randomly intermixed negative and neutral pictures under conditions of full attention and divided attention (DA), followed by a free recall test. Attention was divided by a highly demanding concurrent task tapping visual processing resources. Results showed that the advantage in recall for aversive pictures was still present in the DA condition. However, mediation analyses also revealed that concurrent task performance significantly mediated the emotional enhancement of memory under divided attention. This finding suggests that visual attentional processes play a significant role in the formation of emotional memories.
Contagion on complex networks with persuasion
NASA Astrophysics Data System (ADS)
Huang, Wei-Min; Zhang, Li-Jie; Xu, Xin-Jian; Fu, Xinchu
2016-03-01
The threshold model has been widely adopted as a classic model for studying contagion processes on social networks. We consider asymmetric individual interactions in social networks and introduce a persuasion mechanism into the threshold model. Specifically, we study a combination of adoption and persuasion in cascading processes on complex networks. It is found that with the introduction of the persuasion mechanism, the system may become more vulnerable to global cascades, and the effects of persuasion tend to be more significant in heterogeneous networks than those in homogeneous networks: a comparison between heterogeneous and homogeneous networks shows that under weak persuasion, heterogeneous networks tend to be more robust against random shocks than homogeneous networks; whereas under strong persuasion, homogeneous networks are more stable. Finally, we study the effects of adoption and persuasion threshold heterogeneity on systemic stability. Though both heterogeneities give rise to global cascades, the adoption heterogeneity has an overwhelmingly stronger impact than the persuasion heterogeneity when the network connectivity is sufficiently dense.
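Since the abstract does not spell out the precise persuasion mechanism, the sketch below implements a plain Watts-style threshold cascade with a hypothetical persuasion weight w >= 1 applied to adopted neighbors; treat w, the graph, and the thresholds as illustrative assumptions rather than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(6)

n, k, theta, w = 2000, 6, 0.3, 1.5     # w > 1 is the assumed persuasion weight
adj = [[] for _ in range(n)]
for _ in range(n * k // 2):            # crude random graph with ~n*k/2 edges
    i, j = rng.integers(n, size=2)
    if i != j:
        adj[i].append(j)
        adj[j].append(i)

adopted = np.zeros(n, dtype=bool)
adopted[rng.choice(n, size=10, replace=False)] = True   # small random shock
changed = True
while changed:                          # sweep until no node changes state
    changed = False
    for i in range(n):
        if not adopted[i] and adj[i]:
            influence = w * np.mean(adopted[adj[i]])
            if influence >= theta:
                adopted[i] = True
                changed = True
print(adopted.mean())                   # cascade size; larger w eases cascades
```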
Contagion on complex networks with persuasion
Huang, Wei-Min; Zhang, Li-Jie; Xu, Xin-Jian; Fu, Xinchu
2016-01-01
The threshold model has been widely adopted as a classic model for studying contagion processes on social networks. We consider asymmetric individual interactions in social networks and introduce a persuasion mechanism into the threshold model. Specifically, we study a combination of adoption and persuasion in cascading processes on complex networks. It is found that with the introduction of the persuasion mechanism, the system may become more vulnerable to global cascades, and the effects of persuasion tend to be more significant in heterogeneous networks than those in homogeneous networks: a comparison between heterogeneous and homogeneous networks shows that under weak persuasion, heterogeneous networks tend to be more robust against random shocks than homogeneous networks; whereas under strong persuasion, homogeneous networks are more stable. Finally, we study the effects of adoption and persuasion threshold heterogeneity on systemic stability. Though both heterogeneities give rise to global cascades, the adoption heterogeneity has an overwhelmingly stronger impact than the persuasion heterogeneity when the network connectivity is sufficiently dense. PMID:27029498
Contagion on complex networks with persuasion.
Huang, Wei-Min; Zhang, Li-Jie; Xu, Xin-Jian; Fu, Xinchu
2016-03-31
The threshold model has been widely adopted as a classic model for studying contagion processes on social networks. We consider asymmetric individual interactions in social networks and introduce a persuasion mechanism into the threshold model. Specifically, we study a combination of adoption and persuasion in cascading processes on complex networks. It is found that with the introduction of the persuasion mechanism, the system may become more vulnerable to global cascades, and the effects of persuasion tend to be more significant in heterogeneous networks than those in homogeneous networks: a comparison between heterogeneous and homogeneous networks shows that under weak persuasion, heterogeneous networks tend to be more robust against random shocks than homogeneous networks; whereas under strong persuasion, homogeneous networks are more stable. Finally, we study the effects of adoption and persuasion threshold heterogeneity on systemic stability. Though both heterogeneities give rise to global cascades, the adoption heterogeneity has an overwhelmingly stronger impact than the persuasion heterogeneity when the network connectivity is sufficiently dense.
Transition to turbulence under low-pressure turbine conditions.
Simon, T W; Kaszeta, R W
2001-05-01
In this paper, the topic of laminar to turbulent flow transition, as applied to the design of gas turbines, is discussed. Transition comes about when a flow becomes sufficiently unstable that the orderly vorticity structure of the laminar layer becomes randomly oriented. Vorticity with a streamwise component leads to rapid growth of eddies of a wide range of sizes and eventually to turbulent flow. Under "natural" transition, infinitesimal disturbances of selected frequencies grow. "Bypass transition" is a term coined to describe a similar process, but one driven by strong external disturbances. Transition proceeds so rapidly that the processes associated with "natural" transition seem to be "bypassed." Because the flow environment in the turbine is disturbed by wakes from upstream airfoils, eddies from combustor flows, jets from film cooling, separation zones on upstream airfoils and steps in the duct walls, transition is of the bypass mode. In this paper, we discuss work that has been done to characterize and model bypass transition, as applied to the turbine environment.
Boitard, Simon; Loisel, Patrice
2007-05-01
The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutation. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time-consuming than other methods such as Monte Carlo simulations.
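The underlying two-locus Wright-Fisher model that the diffusion approximates can itself be simulated directly; a neutral forward sketch (deterministic recombination followed by multinomial drift, with no selection or mutation) is given below, with arbitrary population size and recombination fraction.

```python
import numpy as np

rng = np.random.default_rng(7)

N, r, gens = 1000, 0.01, 500             # haplotype count, recombination, time
x = np.array([0.25, 0.25, 0.25, 0.25])   # frequencies of AB, Ab, aB, ab
for _ in range(gens):
    D = x[0] * x[3] - x[1] * x[2]        # linkage disequilibrium
    expected = x + r * D * np.array([-1.0, 1.0, 1.0, -1.0])  # recombination
    x = rng.multinomial(N, expected) / N                     # random drift
print(x, "D =", x[0] * x[3] - x[1] * x[2])
```

This is exactly the kind of Monte Carlo approach the finite-difference solver is designed to outperform: estimating transition densities this way requires very many replicate runs per parameter setting.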
Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko
2012-01-01
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluated 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures of imbalance and three measures of randomness. The maximum absolute imbalance and the correct guess (CG) probability were selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that the performances of the 14 randomization designs lie in a closed region with the upper boundary (worst case) given by Efron's biased coin design (EBCD) and the lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on a quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design.
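As an illustration of the two metrics, here is a simulation sketch of the big stick design: fair-coin assignments until the absolute imbalance reaches the tolerance b, at which point a balancing assignment is forced; the correct-guess probability is computed against the convergence guessing strategy. Trial size and tolerance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def big_stick_trial(n, b, rng):
    """One simulated trial: returns max absolute imbalance and the fraction
    of allocations a convergence-strategy guesser predicts correctly."""
    imbalance, max_abs, correct = 0, 0, 0
    for _ in range(n):
        # Guesser always picks the under-represented arm (random on ties).
        guess = -np.sign(imbalance) if imbalance != 0 else rng.choice([-1, 1])
        if abs(imbalance) >= b:
            arm = -np.sign(imbalance)      # forced balancing assignment
        else:
            arm = rng.choice([-1, 1])      # fair coin
        correct += int(arm == guess)
        imbalance += int(arm)
        max_abs = max(max_abs, abs(imbalance))
    return max_abs, correct / n

results = np.array([big_stick_trial(100, 3, rng) for _ in range(2000)])
print("mean max imbalance:", results[:, 0].mean())
print("correct guess probability:", results[:, 1].mean())
```

By construction the imbalance can never exceed b, while the CG probability stays close to one half, which is why the BSD sits on the favorable boundary of the trade-off region described above.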
Recognition and processing of randomly fluctuating electric signals by Na,K-ATPase.
Xie, T. D.; Marszalek, P.; Chen, Y. D.; Tsong, T. Y.
1994-01-01
Previous work has shown that the Na,K-ATPase of human erythrocytes can extract free energy from sinusoidal electric fields to pump cations up their respective concentration gradients. Because a regularly oscillating waveform is not a feature of the transmembrane electric potential of cells, questions have been raised as to whether these observed effects are biologically relevant. Here we show that a random-telegraph fluctuating electric field (RTF) consisting of alternating square electric pulses with random lifetimes can also stimulate the Rb(+)-pumping mode of the Na,K-ATPase. The net RTF-stimulated, ouabain-sensitive Rb+ pumping was monitored with 86Rb+. The tracer-measured Rb+ influx exhibited frequency and amplitude dependencies that peaked at a mean frequency of 1.0 kHz and an amplitude of 20 V/cm. At 4 degrees C, the maximal pumping activity under these optimal conditions was 28 Rb+/RBC-hr, which is approximately 50% higher than that obtained with the sinusoidal electric field. These findings indicate that the Na,K-ATPase can recognize an electric signal, either regularly oscillatory or randomly fluctuating, for energy coupling, with high fidelity. The use of the RTF for activation also allowed a quantitative theoretical analysis of the kinetics of a membrane transport model of any complexity, according to the theory of electroconformational coupling (ECC), by diagram methods. A four-state ECC model was shown to reproduce the amplitude and frequency windows of the Rb(+)-pumping provided the free energy of interaction of the transporter with the membrane potential included a nonlinear quadratic term. Kinetic constants for the ECC model have been derived. These results indicate that ECC is a plausible mechanism for the recognition and processing of electric signals by proteins of the cell membrane. PMID:7811939
Assessment of applications of transport models on regional scale solute transport
NASA Astrophysics Data System (ADS)
Guo, Z.; Fogg, G. E.; Henri, C.; Pauloo, R.
2017-12-01
Regional-scale transport models are needed to support the long-term evaluation of groundwater quality and to develop management strategies aimed at preventing serious groundwater degradation. The purpose of this study is to evaluate the capacity of previously developed upscaling approaches to accurately describe the main solute transport processes, including the capture of late-time tails, under changing boundary conditions. Advective-dispersive contaminant transport in a 3D heterogeneous domain was simulated and used as a reference solution. Equivalent transport under homogeneous flow conditions was then evaluated by applying the Multi-Rate Mass Transfer (MRMT) model. The random walk particle tracking method was used for both the heterogeneous and homogeneous-MRMT scenarios under steady-state and transient conditions. The results indicate that the MRMT model can capture the tails satisfactorily for a plume transported in an ambient steady-state flow field. However, when boundary conditions change, the mass transfer model calibrated for transport under steady-state conditions cannot accurately reproduce the tailing effect observed for the heterogeneous scenario. The deteriorating impact of transient boundary conditions on the upscaled model is more significant in regions where the flow field is dramatically affected, highlighting the poor applicability of the MRMT approach for complex field settings. Accurately simulating mass in both mobile and immobile zones is critical to representing the transport process under transient flow conditions and will be the future focus of our study.
Mean first-passage times of non-Markovian random walkers in confinement.
Guérin, T; Levernier, N; Bénichou, O; Voituriez, R
2016-06-16
The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.
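A numerical illustration of the non-Markovian setting (not the paper's analytical approach) can be built from fractional Brownian motion: generate exact paths by a Cholesky factorization of the fBm covariance and record the first time each path crosses a target level. The Hurst exponent, horizon, and target below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)

H, n, dt, target = 0.35, 500, 0.01, 0.5   # arbitrary Hurst index and target
t = dt * np.arange(1, n + 1)
# Exact fBm covariance: 0.5 * (s^2H + t^2H - |s - t|^2H).
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability

fpts = []
for _ in range(500):
    path = L @ rng.normal(size=n)
    hit = np.flatnonzero(path >= target)
    if hit.size:                           # some paths never reach the target
        fpts.append(t[hit[0]])
print("mean FPT:", np.mean(fpts), "hit fraction:", len(fpts) / 500)
```

Because increments of fBm with H != 1/2 are correlated at all lags, the walker's past trajectory matters, which is exactly the memory effect the theory above accounts for.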
Mean first-passage times of non-Markovian random walkers in confinement
NASA Astrophysics Data System (ADS)
Guérin, T.; Levernier, N.; Bénichou, O.; Voituriez, R.
2016-06-01
The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.
Epidemic Percolation Networks, Epidemic Outcomes, and Interventions
Kenah, Eben; Miller, Joel C.
2011-01-01
Epidemic percolation networks (EPNs) are directed random networks that can be used to analyze stochastic “Susceptible-Infectious-Removed” (SIR) and “Susceptible-Exposed-Infectious-Removed” (SEIR) epidemic models, unifying and generalizing previous uses of networks and branching processes to analyze mass-action and network-based S(E)IR models. This paper explains the fundamental concepts underlying the definition and use of EPNs, using them to build intuition about the final outcomes of epidemics. We then show how EPNs provide a novel and useful perspective on the design of vaccination strategies.
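For the special case of an SIR model with a fixed infectious period, where every contact transmits independently with probability T, an EPN can be built by simple directed bond percolation; the sketch below (with an illustrative random contact network) uses the out-component of a random index case as the outbreak it would generate.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(10)

n, k, T = 5000, 4, 0.4                         # size, mean degree, transmissibility
pairs = rng.integers(n, size=(n * k // 2, 2))  # crude random contact network
out = [[] for _ in range(n)]
for i, j in pairs:
    if i != j:
        if rng.random() < T:
            out[i].append(j)                   # i would infect j
        if rng.random() < T:
            out[j].append(i)                   # j would infect i

def out_component(src):
    """Size of the outbreak the index case src would cause in this EPN."""
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        for v in out[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen)

sizes = [out_component(int(rng.integers(n))) for _ in range(200)]
print(np.mean([s > 0.05 * n for s in sizes]))  # fraction of large outbreaks
```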
Investigation of dynamic noise affecting geodynamics information in a tethered subsatellite
NASA Technical Reports Server (NTRS)
Gullahorn, G. E.
1985-01-01
Work performed as part of an investigation of noise affecting instrumentation in a tethered subsatellite is reported. The following specific topics were addressed during the reporting period: a method for stabilizing the subsatellite against the rotational effects of atmospheric perturbation was developed; a variety of analytic studies of tether dynamics aimed at elucidating dynamic noise processes were performed; a novel mechanism for coupling longitudinal and latitudinal oscillations of the tether was discovered; and a random vibration analysis for modeling the tethered subsatellite under atmospheric perturbation was carried out.
Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality
NASA Astrophysics Data System (ADS)
Ayala, Mario; Carinci, Gioia; Redig, Frank
2018-06-01
We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including fluctuation fields in a non-stationary context (local equilibrium). For other interacting particle systems with duality, such as the symmetric exclusion process, similar results can be obtained under precise conditions on the n-particle dynamics.
Epidemic Percolation Networks, Epidemic Outcomes, and Interventions
Kenah, Eben; Miller, Joel C.
2011-01-01
Epidemic percolation networks (EPNs) are directed random networks that can be used to analyze stochastic “Susceptible-Infectious-Removed” (SIR) and “Susceptible-Exposed-Infectious-Removed” (SEIR) epidemic models, unifying and generalizing previous uses of networks and branching processes to analyze mass-action and network-based S(E)IR models. This paper explains the fundamental concepts underlying the definition and use of EPNs, using them to build intuition about the final outcomes of epidemics. We then show how EPNs provide a novel and useful perspective on the design of vaccination strategies. PMID:21437002
Memristive behavior of the SnO2/TiO2 interface deposited by sol-gel
NASA Astrophysics Data System (ADS)
Boratto, Miguel H.; Ramos, Roberto A.; Congiu, Mirko; Graeff, Carlos F. O.; Scalvi, Luis V. A.
2017-07-01
A novel and cheap Resistive Random Access Memory (RRAM) device is proposed in this work, based on the interface between antimony-doped tin oxide (4 at.% Sb:SnO2) and titanium oxide (TiO2) thin films, entirely prepared through a low-temperature sol-gel process. The device was fabricated on glass slides using evaporated aluminum electrodes. Our samples show typical bipolar memristive behavior under cyclic voltage sweeping and square-wave voltages, with well-defined high and low resistance states (HRS and LRS) and set and reset voltages. The switching mechanism, explained by charge trapping/de-trapping at defects in the SnO2/TiO2 interface, is mainly driven by the external electric field. The calculated on/off ratio was about 8 × 10² under the best conditions, with good reproducibility over repeated measurement cycles under cyclic voltammetry, and about 10² under an applied square-wave voltage.
Sprayable superhydrophobic nano-chains coating with continuous self-jumping of dew and melting frost
Wang, Shanlin; Zhang, Wenwen; Yu, Xinquan; Liang, Caihua; Zhang, Youfa
2017-01-01
Spontaneous movement of condensed matter provides new insight into efficiently improving condensation heat transfer on superhydrophobic surfaces. However, very few reports have shown such jumping behaviors on sprayable superhydrophobic coatings. Here, we developed a sprayable silica nano-porous coating assembled from fluorinated nano-chains to survey the dynamics of condensates. Dewdrops were continuously removed by self- and/or trigger-propelled motion owing to the abundant nano-pores formed by the random multilayer stacking of nano-chains. In comparison, on coatings lacking nano-pores, stacked from silica nano-spheres and nano-aggregates, dewdrops could only slip off under gravity. More interestingly, the spontaneous jumping effect also occurred for micro-scale frost crystals during the defrosting process on the nano-chain coating surfaces. Unlike the self-jumping motion of dewdrops, the propelling force on frost crystals was provided by a sudden increase of the pressure under the frost crystal. PMID:28074938
Physical layer one-time-pad data encryption through synchronized semiconductor laser networks
NASA Astrophysics Data System (ADS)
Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris
2016-02-01
Semiconductor lasers (SLs) have been proven to be key devices in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals with desirable statistics establishes them as a low-cost solution to cover various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog time series into digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers that are coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that seed true random bit sequences (TRBS) generated at several Gb/s. The ability of each node to access, through the fiber-optic network, random bit streams generated in real time and synchronized with the rest of the nodes allows the implementation of a one-time-pad encryption protocol that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error correction methods are used to reduce the errors in the TRBS and the final error rate at the data-decoding level. An appropriate selection of the sampling methodology and properties, as well as of the physical properties of the chaotic seed signal through which the network locks into synchronization, allows error-free performance.
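The encryption step itself is classical and tiny; the sketch below shows the XOR-based one-time-pad operation, with a software PRNG standing in (purely for illustration) for the synchronized chaotic-laser bit source described above.

```python
import numpy as np

rng = np.random.default_rng(11)   # software PRNG standing in for the
                                  # synchronized chaotic-laser bit source

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """One-time-pad core: XOR with an equal-length pad; applying the same
    pad twice recovers the plaintext."""
    assert len(pad) == len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"physical-layer OTP demo"
pad = rng.integers(0, 256, size=len(message), dtype=np.uint8).tobytes()
cipher = xor_bytes(message, pad)
assert xor_bytes(cipher, pad) == message       # decryption round-trips
```

The security of the scheme rests entirely on the pad being truly random, never reused, and shared only by the communicating nodes, which is precisely what the synchronized laser network is designed to provide.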
Security and composability of randomness expansion from Bell inequalities
NASA Astrophysics Data System (ADS)
Fehr, Serge; Gelles, Ran; Schaffner, Christian
2013-01-01
The nonlocal behavior of quantum mechanics can be used to generate guaranteed fresh randomness from an untrusted device that consists of two nonsignalling components; since the generation process requires some initial fresh randomness to act as a catalyst, one also speaks of randomness expansion. R. Colbeck and A. Kent [J. Phys. A 44, 095305 (2011)] proposed the first method for generating randomness from untrusted devices, but without providing a rigorous analysis. This was addressed subsequently by S. Pironio et al. [Nature 464, 1021 (2010)], who aimed at deriving a lower bound on the min-entropy of the data extracted from an untrusted device based only on the observed nonlocal behavior of the device. Although that article succeeded in developing important tools for reaching the stated goal, the proof itself contained a bug, and the given formal claim on the guaranteed amount of min-entropy needs to be revisited. In this paper we build on the tools provided by Pironio et al. and obtain a meaningful lower bound on the min-entropy of the data produced by an untrusted device based on the observed nonlocal behavior of the device. Our main result confirms the essence of the (improperly formulated) claims of Pironio et al. and puts them on solid ground. We also address the question of composability and show that different untrusted devices can be composed in an alternating manner under the assumption that they are not entangled. This enables superpolynomial randomness expansion based on two untrusted yet unentangled devices.
Membrane Diffusion Occurs by Continuous-Time Random Walk Sustained by Vesicular Trafficking.
Goiko, Maria; de Bruyn, John R; Heit, Bryan
2018-06-19
Diffusion in cellular membranes is regulated by processes that occur over a range of spatial and temporal scales. These processes include membrane fluidity, interprotein and interlipid interactions, interactions with membrane microdomains, interactions with the underlying cytoskeleton, and cellular processes that result in net membrane movement. The complex, non-Brownian diffusion that results from these processes has been difficult to characterize, and moreover, the impact of factors such as membrane recycling on membrane diffusion remains largely unexplored. We have used a careful statistical analysis of single-particle tracking data of the single-pass plasma membrane protein CD93 to show that the diffusion of this protein is well described by a continuous-time random walk in parallel with an aging process mediated by membrane corrals. The overall result is an evolution in the diffusion of CD93: proteins initially diffuse freely on the cell surface but over time become increasingly trapped within diffusion-limiting membrane corrals. Stable populations of freely diffusing and corralled CD93 are maintained by an endocytic/exocytic process in which corralled CD93 is selectively endocytosed, whereas freely diffusing CD93 is replenished by exocytosis of newly synthesized and recycled CD93. This trafficking not only maintained CD93 diffusivity but also maintained the heterogeneous distribution of CD93 in the plasma membrane. These results provide insight into the nature of the biological and biophysical processes that can lead to significantly non-Brownian diffusion of membrane proteins and demonstrate that ongoing membrane recycling is critical to maintaining steady-state diffusion and distribution of proteins in the plasma membrane. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
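A continuous-time random walk of the kind invoked here pairs ordinary displacement steps with heavy-tailed waiting times, which is what produces trapping and aging. A minimal sketch, with a Pareto waiting-time exponent chosen purely for illustration (the paper's corral-mediated aging model is more elaborate):

```python
import numpy as np

def ctrw_2d(n_steps, alpha=0.8, seed=0):
    """2D continuous-time random walk: Gaussian jumps separated by
    power-law waiting times; alpha < 1 gives infinite-mean waits and
    hence aging, subdiffusive behavior."""
    rng = np.random.default_rng(seed)
    waits = rng.pareto(alpha, n_steps) + 1.0       # waiting time before each jump
    jumps = rng.normal(0.0, 1.0, size=(n_steps, 2))
    return np.cumsum(waits), np.cumsum(jumps, axis=0)

times, positions = ctrw_2d(10_000)
```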
Groza, Tudor; Verspoor, Karin
2015-01-01
Concept recognition (CR) is a foundational task in the biomedical domain. It supports the important process of transforming unstructured resources into structured knowledge. To date, several CR approaches have been proposed, most of which focus on a particular set of biomedical ontologies. Their underlying mechanisms vary from shallow natural language processing and dictionary lookup to specialized machine learning modules. However, no prior approach has considered the effects of the case sensitivity characteristics and the term distribution of the underlying ontology on the CR process. This article proposes a framework that models the CR process as an information retrieval task in which both case sensitivity and the information gain associated with tokens in lexical representations (e.g., term labels, synonyms) are central components of a strategy for generating term variants. The case sensitivity of a given ontology is assessed based on the distribution of so-called case sensitive tokens in its terms, while information gain is modelled using a combination of divergence from randomness and mutual information. An extensive evaluation has been carried out using the CRAFT corpus. Experimental results show that case sensitivity awareness leads to an increase of up to 0.07 F1 against a non-case sensitive baseline on the Protein Ontology and GO Cellular Component. Similarly, the use of information gain leads to an increase of up to 0.06 F1 against a standard baseline in the case of GO Biological Process and Molecular Function and GO Cellular Component. Overall, subject to the underlying token distribution, these methods lead to valid complementary strategies for augmenting term label sets to improve concept recognition.
Wu, Liang; Chen, Pu; Dong, Yingsong; Feng, Xiaojun; Liu, Bi-Feng
2013-06-01
Encapsulation of single cells is a challenging task in droplet microfluidics due to the random compartmentalization of cells dictated by Poisson statistics. In this paper, a microfluidic device was developed to improve the single-cell encapsulation rate by integrating droplet generation with fluorescence-activated droplet sorting. After cells were loaded into aqueous droplets by hydrodynamic focusing, an on-flight fluorescence-activated sorting process was conducted to isolate droplets containing one cell. Encapsulation of fluorescent polystyrene beads was investigated to evaluate the developed method. A single-bead encapsulation rate of more than 98 % was achieved under the optimized conditions. Application to encapsulate single HeLa cells was further demonstrated with a single-cell encapsulation rate of 94.1 %, which is about 200 % higher than that obtained by random compartmentalization. We expect this new method to provide a useful platform for encapsulating single cells, facilitating the development of high-throughput cell-based assays.
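The Poisson ceiling that the sorting step circumvents is easy to make concrete: at a mean loading of λ cells per droplet, the single-cell fraction is λe^(-λ), which peaks at about 36.8% when λ = 1. A quick check:

```python
from math import exp, factorial

def p_cells(lam: float, k: int) -> float:
    """Poisson probability of exactly k cells in a droplet at mean loading lam."""
    return lam**k * exp(-lam) / factorial(k)

print(p_cells(1.0, 1))  # ~0.368: the best purely random encapsulation can do
print(p_cells(0.1, 1))  # ~0.090: dilute loading trades throughput for purity
```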
Using Acceptance and Commitment Therapy to Increase Self-Compassion: A Randomized Controlled Trial
Yadavaia, James E.; Hayes, Steven C.; Vilardaga, Roger
2014-01-01
Self-compassion has been shown to be related to several types of psychopathology, including traumatic stress, and has been shown to improve in response to various kinds of interventions. Current conceptualizations of self-compassion fit well with the psychological flexibility model, which underlies acceptance and commitment therapy (ACT). However, there has been no research on ACT interventions specifically aimed at self-compassion. This randomized trial therefore compared a 6-hour ACT-based workshop targeting self-compassion to a wait-list control. From pretreatment to 2-month follow-up, ACT was significantly superior to the control condition in self-compassion, general psychological distress, and anxiety. Process analyses revealed psychological flexibility to be a significant mediator of changes in self-compassion, general psychological distress, depression, anxiety, and stress. Exploratory moderation analyses revealed the intervention to be of more benefit in terms of depression, anxiety, and stress to those with greater trauma history. PMID:25506545
Fabrication of Bi2223 bulks with high critical current properties sintered in Ag tubes
NASA Astrophysics Data System (ADS)
Takeda, Yasuaki; Shimoyama, Jun-ichi; Motoki, Takanori; Kishio, Kohji; Nakashima, Takayoshi; Kagiyama, Tomohiro; Kobayashi, Shin-ichi; Hayashi, Kazuhiko
2017-03-01
Randomly grain-oriented Bi2223 sintered bulks are representative superconducting materials with a weak-link problem, owing to the very short coherence length, particularly along the c-axis, which results in poor intergrain Jc properties. In our previous studies, sintering and/or post-annealing under moderately reducing atmospheres were found to be effective for improving grain coupling in Bi2223 sintered bulks. In the present study, further optimizations of the synthesis process for Bi2223 sintered bulks were attempted to enhance their intergrain Jc. The effects of the applied uniaxial pressing pressure and the sintering conditions on microstructure and superconducting properties have been systematically investigated. The best sample showed an intergrain Jc of 2.0 kA cm-2 at 77 K and 8.2 kA cm-2 at 20 K, even though its relative density was low, ∼65%. These values are quite high for a randomly oriented sintered bulk of cuprate superconductors.
An improved genetic algorithm and its application in the TSP problem
NASA Astrophysics Data System (ADS)
Li, Zheng; Qin, Jinlei
2011-12-01
The concept and current research status of genetic algorithms are introduced in detail. The simple genetic algorithm and an improved algorithm are then described and applied to an example of the TSP, where the advantage of genetic algorithms in solving this NP-hard problem is clearly shown. In addition, starting from the partially matched crossover operator, the crossover method is improved into an extended crossover operator in order to increase efficiency when solving the TSP. In the extended crossover method, crossover can be performed between random positions of two random individuals, unrestricted by position on the chromosome. Finally, the nine-city TSP is solved using the improved genetic algorithm with the extended crossover method; the solution process is considerably more efficient and the optimal solution is reached faster.
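The extended operator described above builds on the classic partially matched crossover (PMX). As a reference point, here is a sketch of standard PMX for permutation-encoded tours; function and variable names are ours, and the paper's extension additionally draws the crossover positions and mates at random:

```python
import random

def pmx(parent1, parent2, rng=random):
    """Partially matched crossover for permutation-encoded TSP tours."""
    n = len(parent1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]      # copy the matching section
    for i in list(range(a)) + list(range(b + 1, n)):
        gene = parent2[i]
        while gene in child[a:b + 1]:      # resolve conflicts via the PMX mapping
            gene = parent2[parent1.index(gene)]
        child[i] = gene
    return child

tour = list(range(9))                      # a nine-city tour, as in the paper
print(pmx(tour, random.sample(tour, 9)))
```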
Using Acceptance and Commitment Therapy to Increase Self-Compassion: A Randomized Controlled Trial.
Yadavaia, James E; Hayes, Steven C; Vilardaga, Roger
2014-10-01
Self-compassion has been shown to be related to several types of psychopathology, including traumatic stress, and has been shown to improve in response to various kinds of interventions. Current conceptualizations of self-compassion fit well with the psychological flexibility model, which underlies acceptance and commitment therapy (ACT). However, there has been no research on ACT interventions specifically aimed at self-compassion. This randomized trial therefore compared a 6-hour ACT-based workshop targeting self-compassion to a wait-list control. From pretreatment to 2-month follow-up, ACT was significantly superior to the control condition in self-compassion, general psychological distress, and anxiety. Process analyses revealed psychological flexibility to be a significant mediator of changes in self-compassion, general psychological distress, depression, anxiety, and stress. Exploratory moderation analyses revealed the intervention to be of more benefit in terms of depression, anxiety, and stress to those with greater trauma history.
Experimental vibroacoustic testing of plane panels using synthesized random pressure fields.
Robin, Olivier; Berry, Alain; Moreau, Stéphane
2014-06-01
The experimental reproduction of random pressure fields on a plane panel and the corresponding induced vibrations is studied. An open-loop reproduction strategy is proposed that uses the synthetic array concept, in which a small array element is moved to create a large array by post-processing. Three possible approaches are suggested to define the complex amplitudes to be imposed on the reproduction sources distributed on a virtual plane facing the panel under test. Using a single acoustic monopole, a scanning laser vibrometer, and a baffled simply supported aluminum panel, experimental vibroacoustic indicators such as the Transmission Loss are obtained for Diffuse Acoustic Field and for high-speed subsonic and supersonic Turbulent Boundary Layer excitations. Comparisons with simulation results obtained using commercial software show that Transmission Loss estimation is possible under both excitations. Moreover, as a complement to frequency-domain indicators, the vibroacoustic behavior of the panel can be studied in the wavenumber domain.
Enhancement of Spike Synchrony in Hindmarsh-Rose Neural Networks by Randomly Rewiring Connections
NASA Astrophysics Data System (ADS)
Yang, Renhuan; Song, Aiguo; Yuan, Wujie
Spike synchrony in the neural system is thought to play dichotomous roles. On the one hand, it is ubiquitously present in the healthy brain and is thought to underlie feature binding during information processing. On the other hand, large-scale synchronization is an underlying mechanism of epileptic seizures. In this paper, we investigate the spike synchrony of Hindmarsh-Rose (HR) neural networks, focusing on the influence of the network connections. The simulations show that desynchronization in the nearest-neighbor coupled network evolves into accurate synchronization as the connection-rewiring probability p increases. We thus uncover a phenomenon of enhancement of spike synchrony by randomly rewiring connections. Spike synchrony is also enhanced as the connection strength c and the average connection number m increase, but this is not the whole story. Furthermore, the possible mechanism behind such synchronization is addressed.
NASA Astrophysics Data System (ADS)
Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan
2018-03-01
We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles—the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate that these two universal principles hold across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.
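The harmonic law concerns how the search time of k concurrent searchers relates to that of a single one; the flavor of the result can be probed with a toy simulation. A sketch on a ring lattice, with topology and parameters chosen only for illustration:

```python
import random

def mean_first_hit(n_nodes, k_walkers, target, trials=1000, seed=0):
    """Mean time until the first of k independent walkers on a ring of
    n_nodes reaches the target; all walkers start at node 0."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, t = [0] * k_walkers, 0
        while target not in pos:
            pos = [(p + rng.choice((-1, 1))) % n_nodes for p in pos]
            t += 1
        total += t
    return total / trials

# Search time shrinks with the number of concurrent walkers.
print([mean_first_hit(20, k, target=10) for k in (1, 2, 4, 8)])
```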
Some Aspects of the Investigation of Random Vibration Influence on Ride Comfort
NASA Astrophysics Data System (ADS)
DEMIĆ, M.; LUKIĆ, J.; MILIĆ, Ž.
2002-05-01
Contemporary vehicles must satisfy high ride comfort criteria. This paper attempts to develop criteria for ride comfort improvement. The highest loading levels have been found to be in the vertical direction and the lowest in the lateral direction in passenger cars and trucks. These results have formed the basis for further laboratory and field investigations. An investigation of human body behaviour under random vibrations is reported in this paper. The research included two phases: biodynamic research and ride comfort investigation. A group of 30 subjects was tested. The influence of broadband random vibrations on the human body was examined through the seat-to-head transmissibility function (STHT). Initially, vertical and fore-and-aft vibrations were considered. Multi-directional vibration was also investigated. In the biodynamic research, subjects were exposed to 0.55, 1.75 and 2.25 m/s2 r.m.s. vibration levels in the 0.5-40 Hz frequency domain. The influence of sitting position on human body behaviour under two-axial vibrations was also examined. Data analysis showed that human body behaviour under two-directional random vibrations could not be approximated by superposition of one-directional random vibrations. Non-linearity of the seated human body in the vertical and fore-and-aft directions was observed. The seat-backrest angle also influenced the STHT. In the second phase of the experimental research, a new method for assessing the influence of narrowband random vibration on the human body was formulated and tested. It included determination of equivalent comfort curves in the vertical and fore-and-aft directions under one- and two-directional narrowband random vibrations. Equivalent comfort curves for durations of 2.5, 4 and 8 h were determined.
Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial
ERIC Educational Resources Information Center
Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.
2016-01-01
This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual…
Random Error in Judgment: The Contribution of Encoding and Retrieval Processes
ERIC Educational Resources Information Center
Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.
2009-01-01
Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…
Mulder, Willem H; Crawford, Forrest W
2015-01-07
Efforts to reconstruct phylogenetic trees and understand evolutionary processes depend fundamentally on stochastic models of speciation and mutation. The simplest continuous-time model for speciation in phylogenetic trees is the Yule process, in which new species are "born" from existing lineages at a constant rate. Recent work has illuminated some of the structural properties of Yule trees, but it remains mostly unknown how these properties affect sequence and trait patterns observed at the tips of the phylogenetic tree. Understanding the interplay between speciation and mutation under simple models of evolution is essential for deriving valid phylogenetic inference methods and gives insight into the optimal design of phylogenetic studies. In this work, we derive the probability distribution of interspecies covariance under Brownian motion and Ornstein-Uhlenbeck models of phenotypic change on a Yule tree. We compute the probability distribution of the number of mutations shared between two randomly chosen taxa in a Yule tree under discrete Markov mutation models. Our results suggest summary measures of phylogenetic information content, illuminate the correlation between site patterns in sequences or traits of related organisms, and provide heuristics for experimental design and reconstruction of phylogenetic trees. Copyright © 2014 Elsevier Ltd. All rights reserved.
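The Yule process named here is the simplest pure-birth model: with n extant lineages, the next speciation arrives after an Exponential(nλ) waiting time. A minimal Gillespie-style sketch (our illustration, not the paper's code):

```python
import numpy as np

def yule_lineage_count(birth_rate, t_max, seed=0):
    """Number of lineages in a Yule tree at time t_max: each of the n
    current lineages splits at rate birth_rate, so the waiting time to
    the next split is Exponential(n * birth_rate)."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, 1
    while True:
        t += rng.exponential(1.0 / (n * birth_rate))
        if t > t_max:
            return n
        n += 1

# Expected lineage count grows as exp(birth_rate * t): ~20 here.
print(np.mean([yule_lineage_count(1.0, 3.0, seed=s) for s in range(500)]))
```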
Trial-by-trial fluctuations in CNV amplitude reflect anticipatory adjustment of response caution.
Boehm, Udo; van Maanen, Leendert; Forstmann, Birte; van Rijn, Hedderik
2014-08-01
The contingent negative variation, a slow cortical potential, occurs when humans are warned by a stimulus about an upcoming task. The cognitive processes that give rise to this EEG potential are not yet well understood. To explain these processes, we adopt a recently developed theoretical framework from the area of perceptual decision-making. This framework assumes that the basal ganglia control the tradeoff between fast and accurate decision-making in the cortex. It suggests that an increase in cortical excitability serves to lower response caution, which results in faster but more error prone responding. We propose that the CNV reflects this increased cortical excitability. To test this hypothesis, we conducted an EEG experiment in which participants performed the random dot motion task either under speed or under accuracy stress. Our results show that trial-by-trial fluctuations in participants' response speed as well as model-based estimates of response caution correlated with single-trial CNV amplitude under conditions of speed but not accuracy stress. We conclude that the CNV might reflect adjustments of response caution, which serves to enhance quick decision-making. Copyright © 2014 Elsevier Inc. All rights reserved.
Dopaminergic influences on formation of a motor memory.
Flöel, Agnes; Breitenstein, Caterina; Hummel, Friedhelm; Celnik, Pablo; Gingert, Christian; Sawaki, Lumy; Knecht, Stefan; Cohen, Leonardo G
2005-07-01
The ability of the central nervous system to form motor memories, a process contributing to motor learning and skill acquisition, decreases with age. Dopaminergic activity, one of the mechanisms implicated in memory formation, experiences a similar decline with aging. It is possible that restoring dopaminergic function in elderly adults could lead to improved formation of motor memories with training. We studied the influence of a single oral dose of levodopa (100 mg) administered before training on the ability to encode an elementary motor memory in the primary motor cortex of elderly and young healthy volunteers in a randomized, double-blind, placebo-controlled design. Attention to the task and motor training kinematics were comparable across age groups and sessions. In young subjects, encoding of a motor memory under placebo was more prominent than in older subjects, and the encoding process was accelerated by intake of levodopa. In the elderly group, the diminished motor memory encoding under placebo was enhanced by intake of levodopa to levels present in younger subjects. Therefore, upregulation of dopaminergic activity accelerated memory formation in young subjects and restored the ability to form a motor memory in elderly subjects, suggesting possible mechanisms underlying the beneficial effects of dopaminergic agents on motor learning in neurorehabilitation.
NASA Astrophysics Data System (ADS)
Boche, Holger; Cai, Minglai; Deppe, Christian; Nötzel, Janis
2017-10-01
We analyze arbitrarily varying classical-quantum wiretap channels. These channels are subject to two attacks at the same time: one passive (eavesdropping) and one active (jamming). We elaborate on our previous studies [H. Boche et al., Quantum Inf. Process. 15(11), 4853-4895 (2016) and H. Boche et al., Quantum Inf. Process. 16(1), 1-48 (2016)] by introducing a reduced class of allowable codes that fulfills a more stringent secrecy requirement than earlier definitions. In addition, we prove that non-symmetrizability of the legal link is sufficient for equality of the deterministic and the common randomness assisted secrecy capacities. Finally, we focus on analytic properties of both secrecy capacities: We completely characterize their discontinuity points and their super-activation properties.
Culture, attention, and emotion.
Grossmann, Igor; Ellsworth, Phoebe C; Hong, Ying-yi
2012-02-01
This research provides experimental evidence for cultural influence on one of the most basic elements of emotional processing: attention to positive versus negative stimuli. To this end, we focused on Russian culture, which is characterized by brooding and melancholy. In Study 1, Russians spent significantly more time looking at negative than positive pictures, whereas Americans did not show this tendency. In Study 2, Russian Latvians were randomly primed with symbols of each culture, after which we measured the speed of recognition for positive versus negative trait words. Biculturals were significantly faster in recognizing negative words (as compared with baseline) when primed with Russian versus Latvian cultural symbols. Greater identification with Russian culture facilitated this effect. We provide a theoretical discussion of mental processes underlying cultural differences in emotion research.
Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images
Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence
2013-01-01
Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421
Mean-Field-Game Model for Botnet Defense in Cyber-Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolokoltsov, V. N., E-mail: v.kolokoltsov@warwick.ac.uk; Bensoussan, A.
We initiate the analysis of the response of computer owners to various offers of defence systems against a cyber-hacker (for instance, a botnet attack), as a stochastic game of a large number of interacting agents. We introduce a simple mean-field game that models their behavior. It takes into account both the random process of the propagation of the infection (controlled by the botnet herder) and the decision-making process of customers. Its stationary version turns out to be exactly solvable (but not at all trivial) under the additional natural assumption that the execution time of the customers' decisions (say, switching the defence system on or off) is much faster than the infection rates.
Massively parallel processor computer
NASA Technical Reports Server (NTRS)
Fung, L. W. (Inventor)
1983-01-01
An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-05-01
Detrended fluctuation analysis (DFA) is used to examine long-range dependence in variations and volatilities of American treasury bills (TB) during periods of low and high movements in TB rates. Volatility series are estimated by the generalized autoregressive conditional heteroskedasticity (GARCH) model under Gaussian, Student, and generalized error distribution (GED) assumptions. The DFA-based Hurst exponents from 3-month, 6-month, and 1-year TB data indicate that, in general, the dynamics of the TB variation process is characterized by persistence during the stable period (before the 2008 international financial crisis) and anti-persistence during the unstable period (after the 2008 international financial crisis). For the volatility series, it is found that, during the stable period, the 3-month volatility process is close to random, the 6-month volatility process is anti-persistent, and the 1-year volatility process is persistent. For the unstable period, estimation results show that the generating process is persistent for all maturities and all distributional assumptions.
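An order-1 DFA estimator is compact enough to sketch; window sizes and the cumulative-profile convention below are the usual ones, not necessarily the paper's. A Hurst exponent above 0.5 indicates persistence, below 0.5 anti-persistence, and near 0.5 an essentially random process:

```python
import numpy as np

def dfa_hurst(x, scales=None):
    """Order-1 detrended fluctuation analysis: slope of log F(s) vs log s."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 12).astype(int))
    F = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        coef = np.polyfit(t, segs.T, 1)                # linear trend per window
        trend = np.outer(coef[0], t) + coef[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

print(dfa_hurst(np.random.default_rng(0).normal(size=4000)))  # ~0.5 for white noise
```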
The Statistical Power of the Cluster Randomized Block Design with Matched Pairs--A Simulation Study
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2010-01-01
This study uses simulation techniques to examine the statistical power of the group-randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…
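Simulation-based power studies of this kind follow one template: generate clustered outcomes, analyze each replicate with the design's test, and count rejections. A sketch for the matched-pair variant, analyzed by a paired t-test on cluster means; all parameter values are placeholders, not the study's:

```python
import numpy as np
from scipy import stats

def mp_cluster_power(n_pairs=20, m=30, effect=0.3, icc=0.1,
                     sims=2000, alpha=0.05, seed=0):
    """Monte Carlo power of a matched-pair cluster-randomized design."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        diffs = []
        for _ in range(n_pairs):
            means = []
            for treated in (1, 0):
                u = rng.normal(0.0, np.sqrt(icc))           # cluster effect
                e = rng.normal(0.0, np.sqrt(1 - icc), m)    # individual error
                means.append(np.mean(treated * effect + u + e))
            diffs.append(means[0] - means[1])
        hits += stats.ttest_1samp(diffs, 0.0).pvalue < alpha
    return hits / sims

print(mp_cluster_power())
```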
A high speed implementation of the random decrement algorithm
NASA Technical Reports Server (NTRS)
Kiraly, L. J.
1982-01-01
The algorithm is useful for measuring net system damping levels in stochastic processes and for developing equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high-speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold-crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold-crossing frequency of 5000 Hz can be processed and a stably averaged signature presented in real time.
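The core of the algorithm is short enough to state directly: detect threshold crossings and ensemble-average the fixed-length subrecords that follow each one. A sketch using an up-crossing trigger, which is one common convention rather than necessarily the one wired into the hardware:

```python
import numpy as np

def random_decrement(x, threshold, seg_len):
    """Random decrement signature: average of all fixed-length subrecords
    beginning at up-crossings of the threshold."""
    x = np.asarray(x, dtype=float)
    starts = np.where((x[:-1] < threshold) & (x[1:] >= threshold))[0] + 1
    starts = starts[starts + seg_len <= len(x)]
    if starts.size == 0:
        raise ValueError("no threshold crossings in record")
    return np.mean([x[i:i + seg_len] for i in starts], axis=0)
```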
Statistical mechanics of scale-free gene expression networks
NASA Astrophysics Data System (ADS)
Gross, Eitan
2012-12-01
The gene co-expression networks of many organisms, including bacteria, mice, and man, exhibit a scale-free distribution. This heterogeneous distribution of connections decreases the vulnerability of the network to random attacks and thus may confer on the genetic replication machinery an intrinsic resilience to such attacks, triggered by changing environmental conditions that the organism may be subject to during evolution. This resilience to random attacks comes at an energetic cost, however, reflected by the lower entropy of the scale-free distribution compared to a more homogeneous, random network. In this study we found that the cell cycle-regulated gene expression pattern of the yeast Saccharomyces cerevisiae obeys a power-law distribution with an exponent α = 2.1 and an entropy of 1.58. The latter is very close to the maximal value of 1.65 obtained from linear optimization of the entropy function under the constraint of a constant cost function, determined by the average degree connectivity.
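The entropy comparison can be sketched numerically from the stated exponent; note that the abstract does not give the log base or the degree cutoff, so the values below only approximate the reported 1.58:

```python
import numpy as np

def powerlaw_entropy(alpha, k_max=10_000, base=np.e):
    """Shannon entropy of a truncated power-law degree distribution
    p(k) ~ k^(-alpha), k = 1..k_max."""
    k = np.arange(1, k_max + 1)
    p = k ** (-alpha)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(base))

print(powerlaw_entropy(2.1))           # nats
print(powerlaw_entropy(2.1, base=2))   # bits
```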
Optimization and universality of Brownian search in a basic model of quenched heterogeneous media
NASA Astrophysics Data System (ADS)
Godec, Aljaž; Metzler, Ralf
2015-05-01
The kinetics of a variety of transport-controlled processes can be reduced to the problem of determining the mean time needed to arrive at a given location for the first time, the so-called mean first-passage time (MFPT) problem. The occurrence of occasional large jumps or intermittent patterns combining various types of motion are known to outperform the standard random walk with respect to the MFPT, by reducing oversampling of space. Here we show that a regular but spatially heterogeneous random walk can significantly and universally enhance the search in any spatial dimension. In a generic minimal model we consider a spherically symmetric system comprising two concentric regions with piecewise constant diffusivity. The MFPT is analyzed under the constraint of conserved average dynamics, that is, the spatially averaged diffusivity is kept constant. Our analytical calculations and extensive numerical simulations demonstrate the existence of an optimal heterogeneity minimizing the MFPT to the target. We prove that the MFPT for a random walk is completely dominated by what we term direct trajectories towards the target and reveal a remarkable universality of the spatially heterogeneous search with respect to target size and system dimensionality. In contrast to intermittent strategies, which are most profitable in low spatial dimensions, the spatially inhomogeneous search performs best in higher dimensions. Discussing our results alongside recent experiments on single-particle tracking in living cells, we argue that the observed spatial heterogeneity may be beneficial for cellular signaling processes.
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.; Zeiler, Thomas A.; Perry, Boyd, III
1989-01-01
This paper describes and illustrates two ways of performing time-correlated gust-load calculations. The first is based on Matched Filter Theory; the second on Random Process Theory. Both approaches yield theoretically identical results and represent novel applications of the theories, are computationally fast, and may be applied to other dynamic-response problems. A theoretical development and example calculations using both Matched Filter Theory and Random Process Theory approaches are presented.
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.; Zeiler, Thomas A.; Perry, Boyd, III
1989-01-01
Two ways of performing time-correlated gust-load calculations are described and illustrated. The first is based on Matched Filter Theory; the second on Random Process Theory. Both approaches yield theoretically identical results and represent novel applications of the theories, are computationally fast, and may be applied to other dynamic-response problems. A theoretical development and example calculations using both Matched Filter Theory and Random Process Theory approaches are presented.
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
Random close packing of polydisperse jammed emulsions
NASA Astrophysics Data System (ADS)
Brujic, Jasna
2010-03-01
Packing problems are everywhere, ranging from oil extraction through porous rocks to grain storage in silos and the compaction of pharmaceutical powders into tablets. At a given density, particulate systems pack into a mechanically stable and amorphous jammed state. Theoretical frameworks have proposed a connection between this jammed state and the glass transition, a thermodynamics of jamming, as well as geometric modeling of random packings. Nevertheless, a simple underlying mechanism for the random assembly of athermal particles, analogous to crystalline ordering, remains unknown. Here we use 3D measurements of polydisperse packings of emulsion droplets to build a simple statistical model in which the complexity of the global packing is distilled into a local stochastic process. From the perspective of a single particle the packing problem is reduced to the random formation of nearest neighbors, followed by a choice of contacts among them. The two key parameters in the model, the available space around a particle and the ratio of contacts to neighbors, are directly obtained from experiments. Remarkably, we demonstrate that this ``granocentric'' view captures the properties of the polydisperse emulsion packing, ranging from the microscopic distributions of nearest neighbors and contacts to local density fluctuations and all the way to the global packing density. Further applications to monodisperse and bidisperse systems quantitatively agree with previously measured trends in global density. This model therefore reveals a general principle of organization for random packing and lays the foundations for a theory of jammed matter.
NASA Astrophysics Data System (ADS)
Tatlier, Mehmet Seha
Random fibrous networks can be found among natural and synthetic materials. Some of these random fibrous networks possess a negative Poisson's ratio and are commonly called auxetic materials. The governing mechanisms behind this counterintuitive property in random networks are yet to be understood, and this kind of auxetic material remains widely under-explored. Most synthetic auxetic materials, however, suffer from low strength, a shortcoming that can be rectified by developing high-strength auxetic composites. Embedding auxetic random fibrous networks in a polymer matrix is an attractive alternative route to the manufacture of auxetic composites; however, before such an approach can be developed, a methodology for designing fibrous networks with the desired negative Poisson's ratios must first be established. This requires an understanding of the factors which bring about negative Poisson's ratios in these materials. In this study, a numerical model is presented in order to investigate the auxetic behavior of compressed random fiber networks. Finite element analyses of three-dimensional stochastic fiber networks were performed to gain insight into the effects of parameters such as network anisotropy, network density, and degree of network compression on the out-of-plane Poisson's ratio and Young's modulus. The simulation results suggest that compression is the critical parameter that gives rise to a negative Poisson's ratio, while anisotropy significantly promotes the auxetic behavior. This model can be utilized to design fibrous auxetic materials and to evaluate the feasibility of developing auxetic composites by using auxetic fibrous networks as the reinforcing layer.
Random Attractors for the Stochastic Navier-Stokes Equations on the 2D Unit Sphere
NASA Astrophysics Data System (ADS)
Brzeźniak, Z.; Goldys, B.; Le Gia, Q. T.
2018-03-01
In this paper we prove the existence of random attractors for the Navier-Stokes equations on the 2-dimensional sphere under random forcing that is irregular in space and time. We also deduce the existence of an invariant measure.
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
Shabbir, Javid
2018-01-01
In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
Li, Mengmeng; Feng, Qiang; Yang, Dezhen
2018-01-01
In the degradation process, the randomness and multiplicity of variables are difficult to describe with mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the object of analysis, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time point. In addition, degradation experiments on the copper bending pipe were carried out and the thickness at each time point was obtained; the response of the maximum stress was then calculated by simulation. Further, with the help of a Monte Carlo method that we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it can predict the replacement cycle of copper bending pipe under seawater-active corrosion more conveniently and accurately. PMID:29584695
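The interference calculation at the heart of the method is simple to sketch: reliability at time t is the probability that the (degraded) limit strength still exceeds the maximum stress. The normal distributions and the linear degradation rate below are placeholders for the experimentally fitted ones in the paper:

```python
import numpy as np

def reliability(strength_mu, strength_sd, stress_mu, stress_sd,
                n=100_000, seed=0):
    """Stress-strength interference by Monte Carlo: P(strength > stress)."""
    rng = np.random.default_rng(seed)
    strength = rng.normal(strength_mu, strength_sd, n)
    stress = rng.normal(stress_mu, stress_sd, n)
    return float(np.mean(strength > stress))

# Time-variant reliability under a made-up linear strength degradation.
for t_years in range(0, 11, 2):
    print(t_years, reliability(400 - 8 * t_years, 20, 250, 25))
```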
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is addressed in this paper. Time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of a normal process. Therefore, we attempt to describe the statistical samples {δfoF2} with a Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
Negative ion treatment increases positive emotional processing in seasonal affective disorder.
Harmer, C J; Charles, M; McTavish, S; Favaron, E; Cowen, P J
2012-08-01
Antidepressant drug treatments increase the processing of positive compared to negative affective information early in treatment. Such effects have been hypothesized to play a key role in the development of later therapeutic responses to treatment. However, it is unknown whether these effects are a common mechanism of action for different treatment modalities. High-density negative ion (HDNI) treatment is an environmental manipulation that has efficacy in randomized clinical trials in seasonal affective disorder (SAD). The current study investigated whether a single session of HDNI treatment could reverse negative affective biases seen in seasonal depression using a battery of emotional processing tasks in a double-blind, placebo-controlled randomized study. Under placebo conditions, participants with seasonal mood disturbance showed reduced recognition of happy facial expressions, increased recognition memory for negative personality characteristics and increased vigilance to masked presentation of negative words in a dot-probe task compared to matched healthy controls. Negative ion treatment increased the recognition of positive compared to negative facial expression and improved vigilance to unmasked stimuli across participants with seasonal depression and healthy controls. Negative ion treatment also improved recognition memory for positive information in the SAD group alone. These effects were seen in the absence of changes in subjective state or mood. These results are consistent with the hypothesis that early change in emotional processing may be an important mechanism for treatment action in depression and suggest that these effects are also apparent with negative ion treatment in seasonal depression.
Nouchi, Rui; Taki, Yasuyuki; Takeuchi, Hikaru; Nozawa, Takayuki; Sekiguchi, Atsushi; Kawashima, Ryuta
2016-01-01
Background: Previous reports have described that simple cognitive training using reading aloud and solving simple arithmetic calculations, so-called “learning therapy”, can improve executive functions and processing speed in older adults. Nevertheless, it is not well known whether learning therapy improves a wide range of cognitive functions. We investigated the beneficial effects of learning therapy on various cognitive functions in healthy older adults. Methods: We used a single-blinded intervention with two groups (learning therapy group: LT and waiting list control group: WL). Sixty-four elderly adults were randomly assigned to LT or WL. In LT, participants performed reading Japanese aloud and solving simple calculations as training tasks for 6 months. WL did not participate in the intervention. We measured several cognitive functions before and after the 6-month intervention period. Results: Compared to WL, results revealed that LT improved inhibition performance in executive functions (Stroop: LT (Mean = 3.88) vs. WL (Mean = 1.22), adjusted p = 0.013 and reverse Stroop: LT (Mean = 3.22) vs. WL (Mean = 1.59), adjusted p = 0.015), verbal episodic memory (Logical Memory (LM): LT (Mean = 4.59) vs. WL (Mean = 2.47), adjusted p = 0.015), focused attention (D-CAT: LT (Mean = 2.09) vs. WL (Mean = −0.59), adjusted p = 0.010) and processing speed (digit symbol coding: LT (Mean = 5.00) vs. WL (Mean = 1.13), adjusted p = 0.015 and Symbol Search (SS): LT (Mean = 3.47) vs. WL (Mean = 1.81), adjusted p = 0.014). Discussion: This randomized controlled trial (RCT) showed the benefit of LT on inhibition in executive functions, verbal episodic memory, focused attention and processing speed in healthy elderly people. Our results are discussed under the overlapping hypothesis. PMID:27242481
Nouchi, Rui; Taki, Yasuyuki; Takeuchi, Hikaru; Nozawa, Takayuki; Sekiguchi, Atsushi; Kawashima, Ryuta
2016-01-01
Previous reports have described that simple cognitive training using reading aloud and solving simple arithmetic calculations, so-called "learning therapy", can improve executive functions and processing speed in older adults. Nevertheless, it is not well known whether learning therapy improves a wide range of cognitive functions. We investigated the beneficial effects of learning therapy on various cognitive functions in healthy older adults. We used a single-blinded intervention with two groups (learning therapy group: LT and waiting list control group: WL). Sixty-four elderly adults were randomly assigned to LT or WL. In LT, participants performed reading Japanese aloud and solving simple calculations as training tasks for 6 months. WL did not participate in the intervention. We measured several cognitive functions before and after the 6-month intervention period. Compared to WL, results revealed that LT improved inhibition performance in executive functions (Stroop: LT (Mean = 3.88) vs. WL (Mean = 1.22), adjusted p = 0.013 and reverse Stroop: LT (Mean = 3.22) vs. WL (Mean = 1.59), adjusted p = 0.015), verbal episodic memory (Logical Memory (LM): LT (Mean = 4.59) vs. WL (Mean = 2.47), adjusted p = 0.015), focused attention (D-CAT: LT (Mean = 2.09) vs. WL (Mean = -0.59), adjusted p = 0.010) and processing speed (digit symbol coding: LT (Mean = 5.00) vs. WL (Mean = 1.13), adjusted p = 0.015 and Symbol Search (SS): LT (Mean = 3.47) vs. WL (Mean = 1.81), adjusted p = 0.014). This randomized controlled trial (RCT) showed the benefit of LT on inhibition in executive functions, verbal episodic memory, focused attention and processing speed in healthy elderly people. Our results are discussed under the overlapping hypothesis.
Statistical properties of several models of fractional random point processes
NASA Astrophysics Data System (ADS)
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
Klumpers, Floris; Everaerd, Daphne; Kooijman, Sabine C.; van Wingen, Guido A.; Fernández, Guillén
2016-01-01
Stress exposure is known to precipitate psychological disorders. However, large differences exist in how individuals respond to stressful situations. A major marker for stress sensitivity is hypothalamus–pituitary–adrenal (HPA)-axis function. Here, we studied how interindividual variance in both basal cortisol levels and stress-induced cortisol responses predicts differences in neural vigilance processing during stress exposure. Implementing a randomized, counterbalanced, crossover design, 120 healthy male participants were exposed to a stress-induction and control procedure, followed by an emotional perception task (viewing fearful and happy faces) during fMRI scanning. Stress sensitivity was assessed using physiological (salivary cortisol levels) and psychological measures (trait questionnaires). High stress-induced cortisol responses were associated with increased stress sensitivity as assessed by psychological questionnaires, a stronger stress-induced increase in medial temporal activity and greater differential amygdala responses to fearful as opposed to happy faces under control conditions. In contrast, high basal cortisol levels were related to relative stress resilience as reflected by higher extraversion scores, a lower stress-induced increase in amygdala activity and enhanced differential processing of fearful compared with happy faces under stress. These findings seem to reflect a critical role for HPA-axis signaling in stress coping; higher basal levels indicate stress resilience, whereas higher cortisol responsivity to stress might facilitate recovery in those individuals prone to react sensitively to stress. PMID:26668010
Instrument Selection for Randomized Controlled Trials Why This and Not That?
Records, Kathie; Keller, Colleen; Ainsworth, Barbara; Permana, Paska
2011-01-01
A fundamental linchpin for obtaining rigorous findings in quantitative research involves the selection of survey instruments. Psychometric recommendations are available for the processes for scale development and testing and guidance for selection of established scales. These processes are necessary to address the validity link between the phenomena under investigation, the empirical measures and, ultimately, the theoretical ties between these and the world views of the participants. Detailed information is most often provided about study design and protocols, but far less frequently is a detailed theoretical explanation provided for why specific instruments are chosen. Guidance to inform choices is often difficult to find when scales are needed for specific cultural, ethnic, or racial groups. This paper details the rationale underlying instrument selection for measurement of the major processes (intervention, mediator and moderator variables, outcome variables) in an ongoing study of postpartum Latinas, Madres para la Salud [Mothers for Health]. The rationale underpinning our choices includes a discussion of alternatives, when appropriate. These exemplars may provide direction for other intervention researchers who are working with specific cultural, racial, or ethnic groups or for other investigators who are seeking to select the ‘best’ instrument. Thoughtful consideration of measurement and articulation of the rationale underlying our choices facilitates the maintenance of rigor within the study design and improves our ability to assess study outcomes. PMID:21986392
A random-censoring Poisson model for underreported data.
de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins
2017-12-30
A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually treated as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM), which accounts for the uncertainty about both the count and the data-reporting processes. Consequently, for each region, we are able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with two competing models under different scenarios. The RCPM and the censored Poisson model are then applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.
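To make the data-quality problem concrete, here is a toy generative sketch of underreported counts. It only illustrates the setting (Poisson counts, region-level censoring probabilities); it is not the RCPM likelihood or its data-augmentation MCMC, and the thinning fraction is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions = 200
exposure = rng.uniform(50, 500, n_regions)   # expected counts per region
risk = rng.gamma(2.0, 0.5, n_regions)        # relative risks to be estimated
pi_censor = rng.beta(2, 8, n_regions)        # per-region censoring probability
true_counts = rng.poisson(exposure * risk)
is_censored = rng.random(n_regions) < pi_censor
# Censored regions report only a thinned count (0.6 is a made-up fraction).
observed = np.where(is_censored, rng.binomial(true_counts, 0.6), true_counts)
```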
Modelling of Rail Vehicles and Track for Calculation of Ground-Vibration Transmission Into Buildings
NASA Astrophysics Data System (ADS)
Hunt, H. E. M.
1996-05-01
A methodology for the calculation of vibration transmission from railways into buildings is presented. The method permits existing models of railway vehicles and track to be incorporated, and it applies to any model of vibration transmission through the ground. Special attention is paid to the relative phasing between adjacent axle-force inputs to the rail, so that vibration transmission may be calculated as a random process. The vehicle-track model is used in conjunction with a building model of infinite length. The track and building are infinite and parallel to each other, and the applied forces are statistically stationary in space, so that vibration levels at any two points along the building are the same. The methodology is two-dimensional for the purpose of applying random process theory, but fully three-dimensional for the calculation of vibration transmission from the track and through the ground into the foundations of the building. The computational efficiency of the method will interest engineers faced with the task of reducing vibration levels in buildings. It is possible to assess the relative merits of using rail pads, under-sleeper pads, ballast mats, floating-slab track or base isolation for particular applications.
Tigers on trails: occupancy modeling for cluster sampling.
Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U
2010-07-01
Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. We believe that these new models represent important additions to the suite of modeling tools now available for occupancy estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.
Temporal changes in randomness of bird communities across Central Europe.
Renner, Swen C; Gossner, Martin M; Kahl, Tiemo; Kalko, Elisabeth K V; Weisser, Wolfgang W; Fischer, Markus; Allan, Eric
2014-01-01
Many studies have examined whether communities are structured by random or deterministic processes, and both are likely to play a role, but relatively few studies have attempted to quantify the degree of randomness in species composition. We quantified, for the first time, the degree of randomness in forest bird communities based on an analysis of spatial autocorrelation in three regions of Germany. The compositional dissimilarity between pairs of forest patches was regressed against the distance between them. We then calculated the y-intercept of the curve, i.e. the 'nugget', which represents the compositional dissimilarity at zero spatial distance. We therefore assume, following similar work on plant communities, that this represents the degree of randomness in species composition. We then analysed how the degree of randomness in community composition varied over time and with forest management intensity, which we expected to reduce the importance of random processes by increasing the strength of environmental drivers. We found that a high proportion of the bird community composition could be explained by chance (overall mean of 0.63), implying that most of the variation in local bird community composition is driven by stochastic processes. Forest management intensity did not consistently affect the mean degree of randomness in community composition, perhaps because the bird communities were relatively insensitive to management intensity. We found a high temporal variation in the degree of randomness, which may indicate temporal variation in assembly processes and in the importance of key environmental drivers. We conclude that the degree of randomness in community composition should be considered in bird community studies, and the high values we find may indicate that bird community composition is relatively hard to predict at the regional scale.
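A minimal sketch of the nugget idea, assuming a Jaccard-type compositional dissimilarity and a simple linear distance-decay fit (the study's actual dissimilarity metric and curve form may differ):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy data: presence/absence of 30 species at 40 patches with 2-D coordinates.
coords = rng.uniform(0, 100, size=(40, 2))
comm = rng.random((40, 30)) < 0.4          # random community matrix

def jaccard_dissimilarity(a, b):
    union = np.sum(a | b)
    return 1.0 - np.sum(a & b) / union if union else 0.0

dists, dissims = [], []
for i, j in combinations(range(len(coords)), 2):
    dists.append(np.linalg.norm(coords[i] - coords[j]))
    dissims.append(jaccard_dissimilarity(comm[i], comm[j]))

# The y-intercept ("nugget") approximates dissimilarity at zero distance.
slope, intercept = np.polyfit(dists, dissims, 1)
print("estimated nugget (degree of randomness):", round(intercept, 3))
```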
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
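The two mixing choices are easy to contrast by simulation; the parameter values below are illustrative, and scipy's invgauss parameterization (mean = mu·scale) is assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000

# Gamma mixing of the Poisson rate -> negative binomial marginal counts.
lam_gamma = rng.gamma(shape=2.0, scale=2.5, size=n)
nb_counts = rng.poisson(lam_gamma)

# Inverse Gaussian mixing -> heavier-tailed Poisson-inverse-Gaussian counts.
lam_ig = stats.invgauss.rvs(mu=1.0, scale=5.0, size=n, random_state=2)
pig_counts = rng.poisson(lam_ig)

for name, c in [("negative binomial", nb_counts), ("Poisson-IG", pig_counts)]:
    print(f"{name}: mean={c.mean():.2f}, var={c.var():.2f}  (Poisson: var = mean)")
```

Both marginals have mean 5 but variance well above the mean, which is the overdispersion signature; the two families differ mainly in tail weight.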
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
…Excited by Input Random Processes [title truncated in source]
Baseski, Igor; Drignei, Dorin; Mourelatos, Zissimos P.; Majcher, Monica (Oakland University, Rochester, MI 48309)
2014-04-09
[Only report-documentation-page residue survives for this entry (contract W56HZV-04-2-0001); no abstract is recoverable.]
A stochastic-geometric model of soil variation in Pleistocene patterned ground
NASA Astrophysics Data System (ADS)
Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc
2013-04-01
In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model. We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
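A minimal sketch of the equivalent-ensemble construction (record count and length are arbitrary; the paper's specific variance tests are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)

# A long single time history of a (here, stationary) random process.
x = rng.standard_normal(2**16)

# Segment it into equal, approximately independent sample records:
# together these form the "equivalent ensemble".
n_records = 64
record_len = len(x) // n_records
ensemble = x[: n_records * record_len].reshape(n_records, record_len)

# Equivalent-ensemble averages at each instant within a record.
ens_mean = ensemble.mean(axis=0)
ens_var = ensemble.var(axis=0)

# Weak stationarity: these should be time-invariant up to sampling scatter.
print("scatter of ensemble means over time:", ens_mean.std().round(4))
# Heuristic ergodicity check: time average of one record vs ensemble average.
print("time avg (record 0):", ensemble[0].mean().round(4),
      "  ensemble avg:", ens_mean.mean().round(4))
```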
Erickson, Jennifer; Abbott, Kenneth; Susienka, Lucinda
2018-06-01
Homeless patients face a variety of obstacles in pursuit of basic social services. Acknowledging this, the Social Security Administration directs employees to prioritize homeless patients and handle their disability claims with special care. However, under existing manual processes for identification of homelessness, many homeless patients never receive the special service to which they are entitled. In this paper, we explore address validation and automatic annotation of electronic health records to improve identification of homeless patients. We assembled a sample of claims containing medical records at the moment of arrival in a single office. Using address validation software, we reconciled patient addresses with public directories of homeless shelters, veterans' hospitals and clinics, and correctional facilities. Other tools annotated the electronic health records. We trained random forests to identify homeless patients and validated each model with 10-fold cross validation. For our finished model, the area under the receiver operating characteristic curve was 0.942. The random forest improved sensitivity from 0.067 to 0.879 but decreased positive predictive value to 0.382. Presumed false positive classifications bore many characteristics of homelessness. Organizations could use these methods to prompt early collection of information necessary to avoid labor-intensive attempts to reestablish contact with homeless individuals. Annually, such methods could benefit tens of thousands of patients who are homeless, destitute, and in urgent need of assistance. We were able to identify many more homeless patients through a combination of automatic address validation and natural language processing of unstructured electronic health records. Copyright © 2018. Published by Elsevier Inc.
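A sketch of the modeling step only, using scikit-learn on synthetic stand-in features rather than the study's claim data (feature counts and class balance below are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for claim features (address-match flags, EHR text
# annotations, etc.); homelessness is the rare positive class.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=6,
                           weights=[0.97, 0.03], random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0)
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```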
Facilitation of learning induced by both random and gradual visuomotor task variation
Braun, Daniel A.; Wolpert, Daniel M.
2012-01-01
Motor task variation has been shown to be a key ingredient in skill transfer, retention, and structural learning. However, many studies only compare training of randomly varying tasks to either blocked or null training, and it is not clear how experiencing different nonrandom temporal orderings of tasks might affect the learning process. Here we study learning in human subjects who experience the same set of visuomotor rotations, evenly spaced between −60° and +60°, either in a random order or in an order in which the rotation angle changed gradually. We compared subsequent learning of three test blocks of +30°→−30°→+30° rotations. The groups that underwent either random or gradual training showed significant (P < 0.01) facilitation of learning in the test blocks compared with a control group who had not experienced any visuomotor rotations before. We also found that movement initiation times in the random group during the test blocks were significantly (P < 0.05) lower than for the gradual or the control group. When we fit a state-space model with fast and slow learning processes to our data, we found that the differences in performance in the test block were consistent with the gradual or random task variation changing the learning and retention rates of only the fast learning process. Such adaptation of learning rates may be a key feature of ongoing meta-learning processes. Our results therefore suggest that both gradual and random task variation can induce meta-learning and that random learning has an advantage in terms of shorter initiation times, suggesting less reliance on cognitive processes. PMID:22131385
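The fast/slow state-space model the authors fit can be sketched as follows; the retention (A) and learning (B) rates here are illustrative, not the fitted values:

```python
import numpy as np

def two_state_adaptation(perturbations, Af=0.60, Bf=0.20, As=0.99, Bs=0.02):
    """Two-process model: the fast state learns and forgets quickly,
    the slow state does both slowly; output is their sum."""
    xf = xs = 0.0
    out = []
    for r in perturbations:
        error = r - (xf + xs)        # performance error on this trial
        xf = Af * xf + Bf * error
        xs = As * xs + Bs * error
        out.append(xf + xs)
    return np.array(out)

# Test schedule like the paper's: +30, -30, +30 degree rotation blocks.
schedule = np.concatenate([np.full(40, 30.0),
                           np.full(40, -30.0),
                           np.full(40, 30.0)])
print("final adaptation level:", two_state_adaptation(schedule)[-1].round(2))
```

In the paper's account, pretraining with random or gradual task variation corresponds to increasing the learning and retention rates of the fast process only.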
Narrowband (LPC-10) Vocoder Performance under Combined Effects of Random Bit Errors and Jet Aircraft Noise [title reconstructed from OCR]
Smith, C. P. (Rome Air Development Center, Griffiss AFB, NY)
1983-12-01
[In-house report covering June 1982 - September 1983; only OCR residue of the report documentation page survives. A recoverable fragment states that the tested compartments, including the NCA compartment, were alike in their effects on overall vocoder performance.]
Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H
1985-03-01
Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle, and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p, a critical value of the independent variable T was found (T = 1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.
van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles
2015-01-01
The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts. PMID:26579044
Resonance energy transfer process in nanogap-based dual-color random lasing
NASA Astrophysics Data System (ADS)
Shi, Xiaoyu; Tong, Junhua; Liu, Dahe; Wang, Zhaona
2017-04-01
The resonance energy transfer (RET) process between Rhodamine 6G and oxazine in nanogap-based random systems is systematically studied by revealing the variations and fluctuations of RET coefficients with pump power density. Three working regions (stable fluorescence, dynamic lasing, and stable lasing) are thus demonstrated in the dual-color random systems. The stable RET coefficients in the fluorescence and lasing regions are generally different and depend strongly on the donor concentration and the donor-acceptor ratio. These results may provide a way to reveal the regularities of energy distribution in random systems and to design tunable multi-color coherent random lasers for colorful imaging.
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h − I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint, (densely) defined on the space of square-integrable functions over the (lower) half-space, where η is a stationary measure. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
Stability and dynamical properties of material flow systems on random networks
NASA Astrophysics Data System (ADS)
Anand, K.; Galla, T.
2009-04-01
The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, Small-World Networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements, and random structures of the underlying graph, tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
Sassani, Farrokh
2014-01-01
The simulation results for electromagnetic energy harvesters (EMEHs) under broad band stationary Gaussian random excitations indicate the importance of both a high transformation factor and a high mechanical quality factor to achieve favourable mean power, mean square load voltage, and output spectral density. The optimum load is different for random vibrations and for sinusoidal vibration. Reducing the total damping ratio under band-limited random excitation yields a higher mean square load voltage. Reduced bandwidth resulting from decreased mechanical damping can be compensated by increasing the electrical damping (transformation factor) leading to a higher mean square load voltage and power. Nonlinear EMEHs with a Duffing spring and with linear plus cubic damping are modeled using the method of statistical linearization. These nonlinear EMEHs exhibit approximately linear behaviour under low levels of broadband stationary Gaussian random vibration; however, at higher levels of such excitation the central (resonant) frequency of the spectral density of the output voltage shifts due to the increased nonlinear stiffness and the bandwidth broadens slightly. Nonlinear EMEHs exhibit lower maximum output voltage and central frequency of the spectral density with nonlinear damping compared to linear damping. Stronger nonlinear damping yields broader bandwidths at stable resonant frequency. PMID:24605063
Sorting processes with energy-constrained comparisons
NASA Astrophysics Data System (ADS)
Geissmann, Barbara; Penna, Paolo
2018-05-01
We study very simple sorting algorithms based on a probabilistic comparator model. In this model, errors in comparing two elements are due to (1) the energy or effort put into the comparison and (2) the difference between the compared elements. Such algorithms repeatedly compare and swap pairs of randomly chosen elements, and they correspond to natural Markovian processes. The study of these Markov chains reveals an interesting phenomenon: in several cases, the algorithm that repeatedly compares only adjacent elements is better than the one making arbitrary comparisons; in the long run, the former algorithm produces sequences that are "better sorted". The analysis of the underlying Markov chain poses interesting questions, as the latter algorithm yields a nonreversible chain, and therefore its stationary distribution seems difficult to calculate explicitly. We nevertheless provide bounds on the stationary distributions and on the mixing times of these processes in several restricted settings.
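A simulation sketch of the two chains under one plausible comparator model (the error function 0.5·exp(−energy·|a−b|) is an assumption, not the paper's exact definition):

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_less(a, b, energy=1.0):
    # Error probability decays with invested energy and with the gap |a - b|.
    p_err = 0.5 * np.exp(-energy * abs(a - b))
    truth = a < b
    return truth if rng.random() > p_err else not truth

def run_chain(n=20, steps=200_000, adjacent=True):
    x = list(rng.permutation(n))
    for _ in range(steps):
        if adjacent:
            i = int(rng.integers(n - 1)); j = i + 1
        else:
            i, j = rng.integers(n), rng.integers(n)
            if i == j:
                continue
            i, j = int(min(i, j)), int(max(i, j))
        if noisy_less(x[j], x[i]):   # comparator says the pair is out of order
            x[i], x[j] = x[j], x[i]
    # Inversion count: 0 means perfectly sorted.
    return sum(x[i] > x[j] for i in range(n) for j in range(i + 1, n))

print("inversions, adjacent-only swaps:", run_chain(adjacent=True))
print("inversions, arbitrary swaps:    ", run_chain(adjacent=False))
```

In line with the paper's observation, the adjacent-only chain typically settles into configurations with fewer inversions.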
When push comes to shove: Exclusion processes with nonlocal consequences
NASA Astrophysics Data System (ADS)
Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.
2015-11-01
Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.
WAMS measurements pre-processing for detecting low-frequency oscillations in power systems
NASA Astrophysics Data System (ADS)
Kovalenko, P. Y.
2017-07-01
Processing data received from measurement systems involves situations in which one or more registered values stand apart from the rest of the sample. These values are referred to as “outliers”. The processing results may be significantly influenced by their presence in the data sample under consideration. In order to ensure the accuracy of low-frequency oscillation detection in power systems, a corresponding algorithm has been developed for outlier detection and elimination. The algorithm is based on the concept of the irregular component of the measurement signal. This component comprises measurement errors and is assumed to be a Gaussian-distributed random component. Median filtering is employed to detect the values lying outside the range of the normally distributed measurement error on the basis of a 3σ criterion. The algorithm has been validated using simulated signals as well as WAMS data.
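A minimal sketch of the median-filter/3σ idea on a synthetic signal (kernel size, noise level, and outlier magnitudes are illustrative):

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(11)

# Synthetic measurement: slow oscillation + Gaussian error + sparse outliers.
t = np.linspace(0.0, 10.0, 1000)
x = np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.standard_normal(t.size)
idx = rng.choice(t.size, size=15, replace=False)
x[idx] += rng.choice([-1.0, 1.0], size=15) * rng.uniform(0.5, 1.0, size=15)

# The median filter tracks the regular component; the residual then
# approximates the irregular (assumed Gaussian) component.
trend = medfilt(x, kernel_size=7)
resid = x - trend
outliers = np.abs(resid) > 3 * resid.std()   # 3-sigma criterion

x_clean = np.where(outliers, trend, x)        # replace outliers by local median
print("flagged outliers:", int(outliers.sum()))
```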
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternate random process, the filtered shot-noise process, eliminates these errors.
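The envelope-modulated filtered white-noise model discussed here is easy to sketch; the filter parameters, integration scheme, and envelope shape below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

dt, n = 0.005, 8000                        # 40 s record
t = np.arange(n) * dt
w = rng.standard_normal(n) / np.sqrt(dt)   # discrete white-noise approximation

# Pass the noise through a damped oscillator (a Kanai-Tajimi-like filter).
wg, zg = 2 * np.pi * 2.5, 0.6              # filter frequency [rad/s], damping
x = v = 0.0
acc = np.empty(n)
for k in range(n):
    a_rel = -2 * zg * wg * v - wg**2 * x - w[k]
    v += a_rel * dt
    x += v * dt
    acc[k] = 2 * zg * wg * v + wg**2 * x   # filtered (colored) output

# Multiplying by a deterministic envelope gives the nonstationary model.
envelope = t * np.exp(1.0 - t)             # rises, peaks at t = 1 s, decays
ground_acc = envelope * acc
print("peak |acceleration|:", np.abs(ground_acc).max().round(2))
```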
ERIC Educational Resources Information Center
Felce, David; Perry, Jonathan
2004-01-01
Background: The aims were to: (i) explore the association between age and size of setting and staffing per resident; and (ii) report resident and setting characteristics, and indicators of service process and resident activity for a national random sample of staffed housing provision. Methods: Sixty settings were selected randomly from those…
Mohan, Deepika; Farris, Coreen; Fischhoff, Baruch; Rosengart, Matthew R; Angus, Derek C; Yealy, Donald M; Wallace, David J; Barnato, Amber E
2017-12-12
To determine whether a behavioral intervention delivered through a video game can improve the appropriateness of trauma triage decisions in the emergency department of non-trauma centers. Randomized clinical trial. Online intervention in a national sample of emergency medicine physicians who make triage decisions at US hospitals. 368 emergency medicine physicians primarily working at non-trauma centers; a random sample (n=200) of those with primary outcome data was reassessed at six months. Physicians were randomized in a 1:1 ratio to one hour of exposure to an adventure video game (Night Shift) or to apps based on traditional didactic education (myATLS and Trauma Life Support MCQ Review), both on iPads. Night Shift was developed to recalibrate the process of using pattern recognition to recognize moderate-severe injuries (representativeness heuristics) through the use of stories to promote behavior change (narrative engagement). Physicians were randomized with a 2×2 factorial design to intervention (game v traditional education apps) and then to the experimental condition under which they completed the outcome assessment tool (low v high cognitive load). Blinding could not be maintained after allocation, but group assignment was masked during the analysis phase. Outcomes were assessed with a virtual simulation that included 10 cases; in four of these the patients had severe injuries. Participants completed the simulation within four weeks of their intervention. Decisions to admit, discharge, or transfer were measured. The proportion of patients under-triaged (patients with severe injuries not transferred to a trauma center) was calculated then (primary outcome) and again six months later, with a different set of cases (primary outcome of follow-up study). The secondary outcome was the effect of cognitive load on under-triage. 149 (81%) physicians in the game arm and 148 (80%) in the traditional education arm completed the trial. Of these, 64/100 (64%) and 58/100 (58%), respectively, completed reassessment at six months. The mean age was 40 (SD 8.9), 283 (96%) were trained in emergency medicine, and 207 (70%) were ATLS (advanced trauma life support) certified. Physicians exposed to the game under-triaged fewer severely injured patients than those exposed to didactic education (316/596 (0.53) v 377/592 (0.64), estimated difference 0.11, 95% confidence interval 0.05 to 0.16; P<0.001). Cognitive load did not influence under-triage (161/308 (0.53) v 155/288 (0.54) in the game arm; 197/300 (0.66) v 180/292 (0.62) in the traditional educational apps arm; P=0.66). At six months, physicians exposed to the game remained less likely to under-triage patients (146/256 (0.57) v 172/232 (0.74), estimated difference 0.17, 0.09 to 0.25; P<0.001). No physician reported side effects. The sample might not reflect all emergency medicine physicians, and a small set of cases was used to assess performance. Compared with apps based on traditional didactic education, exposure of physicians to a theoretically grounded video game improved triage decision making in a validated virtual simulation. Though the observed effect was large, the wide confidence intervals include the possibility of a small benefit, and the real-world efficacy of this intervention remains uncertain. clinicaltrials.gov; NCT02857348 (initial study)/NCT03138304 (follow-up). Published by the BMJ Publishing Group Limited.
Chemical Continuous Time Random Walks
NASA Astrophysics Data System (ADS)
Aquino, T.; Dentz, M.
2017-12-01
Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
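A schematic generalized-Gillespie sketch for a single bimolecular reaction; tying the delay distribution to the instantaneous propensity as below is a simplifying assumption, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(9)

def generalized_gillespie(n_a, n_b, rate, t_max, waiting_time):
    """Kinetic Monte Carlo for A + B -> C in which the inter-reaction time
    is drawn from an arbitrary distribution instead of an exponential."""
    t, history = 0.0, [(0.0, n_a)]
    while n_a > 0 and n_b > 0:
        propensity = rate * n_a * n_b
        t += waiting_time(propensity)
        if t > t_max:
            break
        n_a -= 1
        n_b -= 1
        history.append((t, n_a))
    return history

exp_wait = lambda prop: rng.exponential(1.0 / prop)        # classical, well mixed
# Broad (gamma) delays with the same mean: a stand-in for incomplete mixing.
gamma_wait = lambda prop: rng.gamma(0.3, 1.0 / (0.3 * prop))

h_exp = generalized_gillespie(200, 200, 1e-3, 50.0, exp_wait)
h_gam = generalized_gillespie(200, 200, 1e-3, 50.0, gamma_wait)
print("reactions by t = 50: exponential", len(h_exp) - 1, "| gamma", len(h_gam) - 1)
```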
Conductivity of Nanowire Arrays under Random and Ordered Orientation Configurations
Jagota, Milind; Tansu, Nelson
2015-01-01
A computational model was developed to analyze electrical conductivity of random metal nanowire networks. It was demonstrated for the first time through use of this model that a performance gain in random metal nanowire networks can be achieved by slightly restricting nanowire orientation. It was furthermore shown that heavily ordered configurations do not outperform configurations with some degree of randomness; randomness in the case of metal nanowire orientations acts to increase conductivity. PMID:25976936
Intermittency and random matrices
NASA Astrophysics Data System (ADS)
Sokoloff, Dmitry; Illarionov, E. A.
2015-08-01
A spectacular phenomenon of intermittency, i.e. a progressive growth of higher statistical moments of a physical field excited by an instability in a random medium, attracted the attention of Zeldovich in the last years of his life. At that time, the mathematical aspects underlying the physical description of this phenomenon were still under development and relations between various findings in the field remained obscure. Contemporary results from the theory of the product of independent random matrices (the Furstenberg theory) allowed the elaboration of the phenomenon of intermittency in a systematic way. We consider applications of the Furstenberg theory to some problems in cosmology and dynamo theory.
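The central quantity in the Furstenberg theory, the top Lyapunov exponent of a product of i.i.d. random matrices, can be estimated numerically; the unit-determinant random-shear ensemble below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(13)

def top_lyapunov(sample_matrix, n_steps=100_000):
    """Estimate the top Lyapunov exponent of a random matrix product by
    renormalizing a test vector after each multiplication."""
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_steps):
        v = sample_matrix() @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm
    return log_growth / n_steps

def random_shear():
    # Product of two unit-determinant shears: growth comes purely from disorder.
    upper = np.array([[1.0, rng.normal()], [0.0, 1.0]])
    lower = np.array([[1.0, 0.0], [rng.normal(), 1.0]])
    return upper @ lower

# A positive exponent means typical products grow exponentially: the
# mechanism behind intermittent growth of fields in a random medium.
print("top Lyapunov exponent ~", round(top_lyapunov(random_shear), 4))
```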
Two-time scale subordination in physical processes with long-term memory
NASA Astrophysics Data System (ADS)
Stanislavsky, Aleksander; Weron, Karina
2008-03-01
We describe dynamical processes in continuous media with a long-term memory. Our consideration is based on a stochastic subordination idea and concerns two physical examples in detail. First we study a temporal evolution of the species concentration in a trapping reaction in which a diffusing reactant is surrounded by a sea of randomly moving traps. The analysis uses the random-variable formalism of anomalous diffusive processes. We find that the empirical trapping-reaction law, according to which the reactant concentration decreases in time as a product of an exponential and a stretched exponential function, can be explained by a two-time scale subordination of random processes. Another example is connected with a state equation for continuous media with memory. If the pressure and the density of a medium are subordinated in two different random processes, then the ordinary state equation becomes fractional with two-time scales. This allows one to arrive at the Bagley-Torvik type of state equation.
NASA Astrophysics Data System (ADS)
Xueju, Shen; Chao, Lin; Xiao, Zou; Jianjun, Cai
2015-05-01
We present a nonlinear optical cryptosystem with multi-dimensional keys including phase, polarization and diffraction distance. To make full use of the degrees of freedom that optical processing offers, an elaborately designed vector wave with both a space-variant phase and locally linear polarization is generated with a common-path interferometer for illumination. The joint transform correlator in the Fresnel domain, implemented with a double optical wedge, is utilized as the encryption framework, which provides an additional key known as the Fresnel diffraction distance. Two nonlinear operations are imposed on the joint Fresnel power distribution (JFPD) recorded by a charge coupled device (CCD). The first is division by the power distribution of the reference-window random function, an operation previously proposed by other researchers that can improve the quality of the decrypted image. The second is the recording of a hybrid JFPD using a micro-polarizer array, with orthogonal and random transmissive axes, attached to the CCD. The hybrid JFPD is then further scrambled by substituting random noise for part of the power distribution. The two nonlinear operations break the linearity of this cryptosystem and provide a high level of security. We verify our proposal using a quick response code for noise-free recovery.
NASA Astrophysics Data System (ADS)
Zhou, Shuwei; Xia, Caichu; Zhou, Yu
2018-06-01
Cracks have a significant effect on the uniaxial compression of rocks. Thus, a theoretically analytical approach was proposed to assess the effects of randomly distributed cracks on the effective Young’s modulus during the uniaxial compression of rocks. Each stage of the rock failure during uniaxial compression was analyzed and classified. The analytical approach for the effective Young’s modulus of a rock with only a single crack was derived while considering the three crack states under stress, namely, opening, closure-sliding, and closure-nonsliding. The rock was then assumed to have many cracks with randomly distributed direction, and the effect of crack shape and number during each stage of the uniaxial compression on the effective Young’s modulus was considered. Thus, the approach for the effective Young’s modulus was used to obtain the whole stress-strain process of uniaxial compression. Afterward, the proposed approach was employed to analyze the effects of related parameters on the whole stress-strain curve. The proposed approach was eventually compared with some existing rock tests to validate its applicability and feasibility. The proposed approach has clear physical meaning and shows favorable agreement with the rock test results.
Novel sonar signal processing tool using Shannon entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quazi, A.H.
1996-06-01
Traditionally, conventional signal processing extracts information from sonar signals using amplitude, signal energy, or frequency-domain quantities obtained using spectral analysis techniques. The object is to investigate an alternate approach which is entirely different from that of traditional signal processing: to utilize the Shannon entropy as a tool for the processing of sonar signals, with emphasis on detection, classification, and localization, leading to superior sonar system performance. Traditionally, sonar signals are processed coherently, semi-coherently, and incoherently, depending upon the a priori knowledge of the signals and noise. Here, the detection, classification, and localization technique is based on the concept of the entropy of the random process. Under a constant energy constraint, the entropy of a received process bearing a finite number of sample points is maximum when hypothesis H0 (that the received process consists of noise alone) is true, and decreases when a correlated signal is present (H1). Therefore, the strategy used for detection is: (I) calculate the entropy of the received data; (II) compare the entropy with the maximum value; and (III) make a decision: H1 is assumed if the difference is large compared to a pre-assigned threshold, and H0 is assumed otherwise. The test statistic is the difference between the entropies under H0 and H1. We show simulated results for detecting stationary and non-stationary signals in noise, and results on detection of defects in a Plexiglas bar using an ultrasonic experiment conducted by Hughes. © 1996 American Institute of Physics.
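A minimal sketch of the entropy-comparison strategy on synthetic data (the histogram estimator, bin edges, and signal amplitude are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(17)

edges = np.linspace(-4.0, 4.0, 65)   # fixed bins keep the entropies comparable

def entropy_estimate(x):
    """Histogram estimate of the Shannon entropy (nats) of a sample."""
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log(p))

n = 4096
noise = rng.standard_normal(n)                             # H0: noise alone
k = np.arange(n)
with_signal = noise + 1.5 * np.sin(2 * np.pi * 0.01 * k)   # H1: correlated signal

# Normalize to constant energy so the comparison is entropy-only; the
# Gaussian maximizes entropy at fixed variance, so H1 should come out lower.
h0 = entropy_estimate(noise / noise.std())
h1 = entropy_estimate(with_signal / with_signal.std())
print(f"entropy under H0: {h0:.3f}   under H1: {h1:.3f}")
```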
Theory and generation of conditional, scalable sub-Gaussian random fields
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2016-03-01
Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one- and two-dimensional synthetic test cases.
NASA Astrophysics Data System (ADS)
Vanmarcke, Erik
1983-03-01
Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.
Random numbers certified by Bell's theorem.
Pironio, S; Acín, A; Massar, S; de la Giroday, A Boyer; Matsukevich, D N; Maunz, P; Olmschenk, S; Hayes, D; Luo, L; Manning, T A; Monroe, C
2010-04-15
Randomness is a fundamental feature of nature and a valuable resource for applications ranging from cryptography and gambling to numerical simulation of physical and biological systems. Random numbers, however, are difficult to characterize mathematically, and their generation must rely on an unpredictable physical process. Inaccuracies in the theoretical modelling of such processes or failures of the devices, possibly due to adversarial attacks, limit the reliability of random number generators in ways that are difficult to control and detect. Here, inspired by earlier work on non-locality-based and device-independent quantum information processing, we show that the non-local correlations of entangled quantum particles can be used to certify the presence of genuine randomness. It is thereby possible to design a cryptographically secure random number generator that does not require any assumption about the internal working of the device. Such a strong form of randomness generation is impossible classically and possible in quantum systems only if certified by a Bell inequality violation. We carry out a proof-of-concept demonstration of this proposal in a system of two entangled atoms separated by approximately one metre. The observed Bell inequality violation, featuring near perfect detection efficiency, guarantees that 42 new random numbers are generated with 99 per cent confidence. Our results lay the groundwork for future device-independent quantum information experiments and for addressing fundamental issues raised by the intrinsic randomness of quantum theory.
ERIC Educational Resources Information Center
Schweig, Jonathan David; Pane, John F.
2016-01-01
Demands for scientific knowledge of what works in educational policy and practice have driven interest in quantitative investigations of educational outcomes, and randomized controlled trials (RCTs) have proliferated under these conditions. In educational settings, even when individuals are randomized, both experimental and control students are…
Zhang, Ling; Liu, Shuming; Liu, Wenjun
2014-02-01
Polymeric pipes, such as unplasticized polyvinyl chloride (uPVC) pipes, polypropylene random (PPR) pipes and polyethylene (PE) pipes are increasingly used for drinking water distribution lines. Plastic pipes may include some additives like metallic stabilizers and other antioxidants for the protection of the material during its production and use. Thus, some compounds can be released from those plastic pipes and cast a shadow on drinking water quality. This work develops a new procedure to investigate three types of polymer pipes (uPVC, PE and PPR) with respect to the migration of total organic carbon (TOC) into drinking water. The migration test was carried out in stagnant conditions with two types of migration processes, a continuous migration process and a successive migration process. These two types of migration processes are specially designed to mimic the conditions of different flow manners in drinking water pipelines, i.e., the situation of continuous stagnation with long hydraulic retention times and normal flow status with regular water renewing in drinking water networks. The experimental results showed that TOC release differed significantly with different plastic materials and under different flow manners. The order of materials with respect to the total amount of TOC migrating into drinking water was observed as PE > PPR > uPVC under both successive and continuous migration conditions. A higher amount of organic migration from PE and PPR pipes was likely to occur due to more organic antioxidants being used in pipe production. The results from the successive migration tests indicated the trend of the migration intensity of different pipe materials over time, while the results obtained from the continuous migration tests implied that under long stagnant conditions, the drinking water quality could deteriorate quickly with the consistent migration of organic compounds and the dramatic consumption of chlorine to a very low level. Higher amounts of TOC were released under the continuous migration tests.
Solution-Processed Carbon Nanotube True Random Number Generator.
Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C
2017-08-09
With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.
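The standardized statistical tests referred to are typically the NIST SP 800-22 suite; a sketch of its simplest member, the frequency (monobit) test, applied here to a software bit stream rather than SWCNT SRAM output:

```python
import math
import random

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: in a truly random stream
    the proportion of ones should be close to 1/2; p >= 0.01 passes."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

stream = [random.getrandbits(1) for _ in range(100_000)]
print("monobit p-value:", round(monobit_test(stream), 4))
```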
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, within one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
Nanotip Carpets as Antireflection Surfaces
NASA Technical Reports Server (NTRS)
Bae, Youngsam; Mobasser, Sohrab; Manohara, Harish; Lee, Choonsup
2008-01-01
Carpet-like random arrays of metal-coated silicon nanotips have been shown to be effective as antireflection surfaces. Now undergoing development for incorporation into Sun sensors that would provide guidance for robotic exploratory vehicles on Mars, nanotip carpets of this type could also have many uses on Earth as antireflection surfaces in instruments that handle or detect ultraviolet, visible, or infrared light. In the original Sun-sensor application, what is required is an array of 50-micron-diameter apertures on what is otherwise an opaque, minimally reflective surface, as needed to implement a miniature multiple-pinhole camera. The process for fabrication of an antireflection nanotip carpet for this application (see Figure 1) includes, and goes somewhat beyond, the process described in A New Process for Fabricating Random Silicon Nanotips (NPO-40123), NASA Tech Briefs, Vol. 28, No. 1 (November 2004), page 62. In the first step, which is not part of the previously reported process, photolithography is performed to deposit etch masks to define the 50-micron apertures on a silicon substrate. In the second step, which is part of the previously reported process, the non-masked silicon area between the apertures is subjected to reactive ion etching (RIE) under a special combination of conditions that results in the growth of fluorine-based compounds in randomly distributed formations, known in the art as "polymer RIE grass," that have dimensions of the order of microns. The polymer RIE grass formations serve as microscopic etch masks during the next step, in which deep reactive ion etching (DRIE) is performed. What remains after DRIE is the carpet of nanotips, which are high-aspect-ratio peaks, the tips of which have radii of the order of nanometers. Next, the nanotip array is evaporatively coated with Cr/Au to enhance the absorption of light (more specifically, infrared light in the Sun-sensor application). The photoresist etch masks protecting the apertures are then removed by dipping the substrate into acetone. Finally, for the Sun-sensor application, the back surface of the substrate is coated with a 57-nm-thick layer of Cr for attenuation of sunlight.
NASA Astrophysics Data System (ADS)
Nigmatullin, R.; Rakhmatullin, R.
2014-12-01
Many experimentalists are accustomed to thinking that independent measurements form non-correlated samples that depend only weakly on one another. We reconsider this conventional point of view and show that similar measurements form a strongly-correlated sequence of random functions with memory; in other words, successive measurements "remember" at least their nearest neighbors. This observation, justified on real data, makes it possible to fit a wide set of data with the Prony function. The Prony decomposition follows from the quasi-periodic (QP) properties of the measured functions and includes the Fourier transform as a partial case. This new type of decomposition yields a specific amplitude-frequency response (AFR) of the measured (random) functions, and each random function is described by fewer fitting parameters than its number of initial data points. The calculated AFR can be considered as the generalized Prony spectrum (GPS), which is extremely useful in cases where no simple model describing the measured data is available but a quantitative description is still needed. These possibilities open a new way to cluster the initial data, and the new information contained in the data offers a chance for detailed analysis. Electron paramagnetic resonance (EPR) measurements performed on an empty resonator (pure noise data) and on a resonator containing a sample (CeO2 in our case) confirmed the existence of QP processes in reality. We believe that the detection of QP processes is a common feature of many repeated measurements, and this new property of successive measurements should attract the attention of many experimentalists. Our aims are: (a) to formulate general conditions that help to identify and then detect the presence of a QP process in repeated experimental measurements; (b) to find a functional equation, and its solution, that describes the identified QP process; and (c) to suggest a computing algorithm for fitting the QP data to the analytical function that follows from the solution of the corresponding functional equation. The content of this paper is organized as follows. In Section 2 we address the problems posed in this introductory section; it also contains the mathematical description of the QP process and an interpretation of the meaning of the generalized Prony spectrum, which includes the conventional Fourier decomposition as a partial case. Section 3 contains the experimental details associated with obtaining the desired data. Section 4 explains specific features of the application of the general algorithm to concrete data. In Section 5 we summarize the results and outline the perspectives of this approach for the quantitative description of time-dependent random data registered in different complex systems and experimental devices. Here we should note that by a complex system we mean a system for which a conventional model is absent [6]. By simplicity of the acceptable model we mean the proper hypothesis (the "best fit" model) containing the minimal number of fitting parameters that describes the behavior of the system quantitatively. The different approaches that exist nowadays for the description of these systems are collected in the recent review [7].
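Classical Prony fitting, which the generalized Prony spectrum builds on, can be sketched as follows (the test signal and model order are illustrative; this is not the authors' full GPS algorithm):

```python
import numpy as np

def prony_fit(x, p):
    """Classical Prony method: fit x[n] ~ sum_k A_k z_k**n with p complex
    modes z_k (damped/oscillating exponentials), generalizing Fourier bins."""
    N = len(x)
    # 1) Linear prediction: x[n] = -a_1 x[n-1] - ... - a_p x[n-p].
    M = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(M, -x[p:], rcond=None)
    # 2) The modes are the roots of the characteristic polynomial.
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) Amplitudes from a Vandermonde least-squares fit.
    V = np.vander(z, N, increasing=True).T
    amps, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, amps

# Toy quasi-periodic signal: two damped tones.
n = np.arange(200)
sig = 1.5 * 0.99**n * np.cos(0.3 * n) + 0.8 * 0.995**n * np.cos(0.7 * n + 1.0)
z, amps = prony_fit(sig, p=4)
print("recovered decay factors |z|:", np.round(np.sort(np.abs(z)), 4))
```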
On joint subtree distributions under two evolutionary models.
Wu, Taoyang; Choi, Kwok Pui
2016-04-01
In population and evolutionary biology, hypotheses about micro-evolutionary and macro-evolutionary processes are commonly tested by comparing the shape indices of empirical evolutionary trees with those predicted by neutral models. A key ingredient in this approach is the ability to compute and quantify distributions of various tree shape indices under random models of interest. As a step to meet this challenge, in this paper we investigate the joint distribution of cherries and pitchforks (that is, subtrees with two and three leaves) under two widely used null models: the Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model. Based on two novel recursive formulae, we propose a dynamic approach to numerically compute the exact joint distribution (and hence the marginal distributions) for trees of any size. We also obtained insights into the statistical properties of trees generated under these two models, including a constant correlation between the cherry and the pitchfork distributions under the YHK model, and the log-concavity and unimodality of the cherry distributions under both models. In addition, we show that there exists a unique change point for the cherry distributions between these two models. Copyright © 2015 Elsevier Inc. All rights reserved.
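The YHK model is straightforward to simulate, which gives a quick numerical cross-check on the exact recursions (tree size and number of replicates below are arbitrary):

```python
import random
import statistics
from collections import Counter

rng = random.Random(21)

def yule_tree(n_leaves):
    """Grow a rooted binary tree under the YHK (Yule) model: repeatedly
    split a uniformly chosen leaf into two new leaves."""
    children, leaves, next_id = {0: []}, [0], 1
    while len(leaves) < n_leaves:
        leaf = leaves.pop(rng.randrange(len(leaves)))
        children[leaf] = [next_id, next_id + 1]
        children[next_id], children[next_id + 1] = [], []
        leaves += [next_id, next_id + 1]
        next_id += 2
    return children

def cherries_pitchforks(children):
    def leaves_below(v):
        kids = children[v]
        return 1 if not kids else sum(leaves_below(c) for c in kids)
    sizes = Counter(leaves_below(v) for v in children if children[v])
    return sizes[2], sizes[3]        # subtrees with exactly 2 and 3 leaves

trials = [cherries_pitchforks(yule_tree(50)) for _ in range(2000)]
print("mean cherries:  ", statistics.mean(c for c, _ in trials))
print("mean pitchforks:", statistics.mean(p for _, p in trials))
```

Swapping in a PDA sampler in place of yule_tree would expose the between-model differences, such as the change point in the cherry distributions that the paper identifies.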
Tonguet-Papucci, Audrey; Huybregts, Lieven; Ait Aissa, Myriam; Huneau, Jean-François; Kolsteren, Patrick
2015-08-08
Wasting is a public health issue, but evidence gaps remain concerning preventive strategies not primarily based on food products. Cash transfers, as part of a safety net approach, have the potential to prevent under-nutrition. However, most cash transfer programs implemented and scientifically evaluated do not have a clear nutritional objective, which leads to a lack of evidence regarding their nutritional benefits. The MAM'Out research project aims at evaluating a seasonal and multiannual cash transfer program to prevent acute malnutrition in children under 36 months, in terms of effectiveness and cost-effectiveness, in the Tapoa province (Eastern region of Burkina Faso, Africa). The program is targeted at economically vulnerable households with children less than 1 year old at the time of inclusion. Cash is distributed to mothers and the transfers are unconditional, leaving the beneficiaries free to decide on the use of the cash. The study is designed as a two-arm cluster randomized intervention trial, based on the randomization of rural villages. One group receives cash transfers via mobile phones and one is a control group. The main outcomes are the cumulative incidence of acute malnutrition and cost-effectiveness. Child anthropometry (height, weight and MUAC) is followed, as well as indicators related to dietary diversity, food security, health center utilization, family expenses, women's empowerment and morbidities. 24-h food recalls are also carried out. Individual interviews and focus group discussions allow the collection of qualitative data. Finally, based on a theoretical framework built a priori, the pathways by which the cash affects the prevention of under-nutrition will be assessed. The design chosen will lead to a robust assessment of the effectiveness of the proposed intervention. Several challenges arose while implementing the study, and discrepancies with the research protocol, mainly due to unforeseen events, can be highlighted, such as a delay in project implementation, a switch to e-data collection and the introduction of a supervision process. ClinicalTrials.gov, identifier: NCT01866124, registered May 7, 2013.
Viability of the Alaskan breeding population of Steller’s eiders
Dunham, Kylee; Grand, James B.
2016-10-11
The U.S. Fish and Wildlife Service is tasked with setting objective and measurable criteria for delisting species or populations listed under the Endangered Species Act. Determining the acceptable threshold of extinction risk for any species or population is a challenging task, particularly when facing marked uncertainty. The Alaskan breeding population of Steller’s eiders (Polysticta stelleri) was listed as threatened under the Endangered Species Act in 1997 because of a perceived decline in abundance throughout their nesting range and geographic isolation from the Russian breeding population. Previous genetic studies and modeling efforts, however, suggest that there may be dispersal from the Russian breeding population. Additionally, evidence exists of population-level nonbreeding events. Research was conducted to estimate the population viability of the Alaskan breeding population of Steller’s eiders, using both open and closed models of population process for this threatened population. Projections under a closed population model suggest this population has a 100 percent probability of extinction within 42 years. Projections under an open population model suggest that with immigration there is no probability of permanent extinction. Because of the random immigration process and nonbreeding behavior, however, it is likely that this population will continue to be present in low and highly variable numbers on the breeding grounds in Alaska. Monitoring the winter population, which includes both Russian and Alaskan breeding birds, may offer a more comprehensive indication of population viability.
Stochastic arbitrage return and its implication for option pricing
NASA Astrophysics Data System (ADS)
Fedotov, Sergei; Panayides, Stephanos
2005-01-01
The purpose of this work is to explore the role that random arbitrage opportunities play in pricing financial derivatives. We use a non-equilibrium model to set up a stochastic portfolio, and for the random arbitrage return, we choose a stationary ergodic random process rapidly varying in time. We exploit the fact that option price and random arbitrage returns change on different time scales which allows us to develop an asymptotic pricing theory involving the central limit theorem for random processes. We restrict ourselves to finding pricing bands for options rather than exact prices. The resulting pricing bands are shown to be independent of the detailed statistical characteristics of the arbitrage return. We find that the volatility “smile” can also be explained in terms of random arbitrage opportunities.
Noise, chaos, and (ε, τ)-entropy per unit time
NASA Astrophysics Data System (ADS)
Gaspard, Pierre; Wang, Xiao-Jing
1993-12-01
The degree of dynamical randomness of different time processes is characterized in terms of the (ε, τ)-entropy per unit time. The (ε, τ)-entropy is the amount of information generated per unit time, at different scales τ of time and ε of the observables. This quantity generalizes the Kolmogorov-Sinai entropy per unit time from deterministic chaotic processes, to stochastic processes such as fluctuations in mesoscopic physico-chemical phenomena or strong turbulence in macroscopic spacetime dynamics. The random processes that are characterized include chaotic systems, Bernoulli and Markov chains, Poisson and birth-and-death processes, Ornstein-Uhlenbeck and Yaglom noises, fractional Brownian motions, different regimes of hydrodynamical turbulence, and the Lorentz-Boltzmann process of nonequilibrium statistical mechanics. We also extend the (ε, τ)-entropy to spacetime processes like cellular automata, Conway's game of life, lattice gas automata, coupled maps, spacetime chaos in partial differential equations, as well as the ideal, the Lorentz, and the hard sphere gases. Through these examples it is demonstrated that the (ε, τ)-entropy provides a unified quantitative measure of dynamical randomness to both chaos and noises, and a method to detect transitions between dynamical states of different degrees of randomness as a parameter of the system is varied.
Laser-Aided Directed Energy Deposition of Steel Powder over Flat Surfaces and Edges.
Caiazzo, Fabrizia; Alfieri, Vittorio
2018-03-16
In the framework of Additive Manufacturing of metals, Directed Energy Deposition of steel powder over flat surfaces and edges has been investigated in this paper. The aims are the repair and overhaul of actual worn-out, highly price-sensitive metal components. A full-factorial experimental plan has been arranged, and the results are discussed in terms of geometry, microhardness and thermal affection as functions of the main governing parameters: laser power, scanning speed and mass flow rate; dilution and catching efficiency have been evaluated as well, to compare the quality and effectiveness of the process under conditions of both flat and edge deposition. Convincing results are presented to give grounds for shifting the process to actual applications: namely, no cracks or pores have been found in random cross-sections of the samples within the processing window. Interestingly, an effect of the scanning conditions on the resulting hardness in the fusion zone has been proven; the mechanical characteristics are therefore expected to depend on the processing parameters.
Laser-Aided Directed Energy Deposition of Steel Powder over Flat Surfaces and Edges
2018-01-01
In the framework of Additive Manufacturing of metals, Directed Energy Deposition of steel powder over flat surfaces and edges has been investigated in this paper. The aims are the repair and overhaul of actual worn-out, highly price-sensitive metal components. A full-factorial experimental plan has been arranged, and the results are discussed in terms of geometry, microhardness and thermal affection as functions of the main governing parameters: laser power, scanning speed and mass flow rate; dilution and catching efficiency have been evaluated as well, to compare the quality and effectiveness of the process under conditions of both flat and edge deposition. Convincing results are presented to give grounds for shifting the process to actual applications: namely, no cracks or pores have been found in random cross-sections of the samples within the processing window. Interestingly, an effect of the scanning conditions on the resulting hardness in the fusion zone has been proven; the mechanical characteristics are therefore expected to depend on the processing parameters. PMID:29547571
Phase separation like dynamics during Myxococcus xanthus fruiting body formation
NASA Astrophysics Data System (ADS)
Liu, Guannan; Thutupalli, Shashi; Wigbers, Manon; Shaevitz, Joshua
2015-03-01
Collective motion exists in many living organisms as an advantageous strategy that helps the entire group with predation, foraging, and survival. However, the principles of self-organization underlying such collective motions remain unclear. During the various developmental stages of the soil-dwelling bacterium Myxococcus xanthus, different types of collective motion are observed. In particular, when starved, M. xanthus cells eventually aggregate to form 3-dimensional structures (fruiting bodies), inside which cells sporulate in response to the stress. We study the fruiting body formation process as an out-of-equilibrium phase separation process. As local cell density increases, the dynamics of aggregating M. xanthus cells switch from a spatio-temporally random process, resembling nucleation and growth, to an emergent pattern-formation process similar to spinodal decomposition. By employing high-resolution microscopy and a video analysis system, we are able to track the motion of single cells within motile collective groups, while separately tuning local cell density, cell velocity and reversal frequency, probing the multi-dimensional phase space of M. xanthus development.
Developing and executing quality improvement projects (concept, methods, and evaluation).
Likosky, Donald S
2014-03-01
Continuous quality improvement, quality assurance, cycles of change--these words are often used to express the process of using data to inform and improve clinical care. Although many of us have been exposed to the theory and practice of experimental work (e.g., the randomized trial), few of us have been similarly exposed to the science underlying quality improvement. Through the lens of a single-center quality improvement study, this article exposes the reader to the methodology for conducting such studies. The reader will gain an understanding of the methods required to embark on such a study.
A novel Bayesian approach to acoustic emission data analysis.
Agletdinov, E; Pomponi, E; Merson, D; Vinogradov, A
2016-12-01
Acoustic emission (AE) technique is a popular tool for materials characterization and non-destructive testing. Originating from the stochastic motion of defects in solids, AE is a random process by nature. A challenging problem arises whenever an attempt is made to identify specific points corresponding to changes in the trends of the fluctuating AE time series. A general Bayesian framework is proposed for the analysis of AE time series, aiming at automatically finding the breakpoints that signal a crossover in the dynamics of the underlying AE sources. Copyright © 2016 Elsevier B.V. All rights reserved.
Lin, Huijuan; Li, Li; Ren, Jing; Cai, Zhenbo; Qiu, Longbin; Yang, Zhibin; Peng, Huisheng
2013-01-01
Polyaniline composite films incorporated with aligned multi-walled carbon nanotubes (MWCNTs) are synthesized through an easy electrodeposition process. These robust and electrically conductive films are found to function as effective electrodes for fabricating transparent and flexible supercapacitors with a maximum specific capacitance of 233 F/g at a current density of 1 A/g. This is 36 times that of a bare MWCNT sheet, 23 times that of pure polyaniline, and 3 times that of a randomly dispersed MWCNT/polyaniline film under the same conditions. The novel supercapacitors also show high cyclic stability. PMID:23443325
How to measure a-few-nanometer-small LER occurring in EUV lithography processed feature
NASA Astrophysics Data System (ADS)
Kawada, Hiroki; Kawasaki, Takahiro; Kakuta, Junichi; Ikota, Masami; Kondo, Tsuyoshi
2018-03-01
For EUV lithography features we want to decrease the dose and/or energy of the CD-SEM's probe beam, because the LER itself decreases when the beam causes severe shrink of the resist material. Under such low-dose conditions, however, the measured LER exceeds the true LER due to LER bias, i.e., fake LER caused by random noise in the SEM image. A gap error also occurs between the right and left LERs. In this work we propose new procedures to obtain the true LER by excluding the LER bias from the measured LER. To verify them we propose a reference metrology for LER using TEM.
NASA Astrophysics Data System (ADS)
Laptev, A. G.; Basharov, M. M.; Farakhova, A. I.
2013-09-01
The process by which small droplets contained in emulsions physically coagulate on the surface of random packing elements is considered. The theory of turbulent migration of a finely dispersed phase is used to determine the coagulation efficiency. Expressions for calculating the coagulation efficiency and the turbulent transfer rate are obtained by applying models of a turbulent boundary layer. An example of calculating the enlargement of water droplets in a hydrocarbon medium represented by a wide fraction of light hydrocarbons (also known as natural gas liquid) is given. The process flowchart of a system for removing petroleum products from effluent waters discharged from the Kazan TETs-1 cogeneration station is considered. Replacement of the mechanical filter by a thin-layer settler with a coagulator is proposed.
Renyi entropy measures of heart rate Gaussianity.
Lake, Douglas E
2006-01-01
Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggest that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
Money creation process in a random redistribution model
NASA Astrophysics Data System (ADS)
Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan
2014-01-01
In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
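A minimal sketch of a random-exchange simulation in the spirit of the model described above; the paper's transfer-matrix and diffusion analyses are not reproduced, and all parameters here (population size, initial endowment, debt limit) are illustrative assumptions:

```python
import random

def random_exchange_with_debt(N=500, m0=3, debt_limit=3, steps=200000, seed=1):
    """Each step, a random payer hands one unit to a random receiver;
    the payer may go into debt, but only down to -debt_limit."""
    rng = random.Random(seed)
    money = [m0] * N
    for _ in range(steps):
        i, j = rng.sample(range(N), 2)
        if money[i] > -debt_limit:
            money[i] -= 1
            money[j] += 1
    return money

money = random_exchange_with_debt()
# Agents with neither money nor debt -- the group the paper identifies
# as the source of money creation under random exchange.
print(sum(1 for m in money if m == 0))
```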
Wolfe, Edward W; McGill, Michael T
2011-01-01
This article summarizes a simulation study of the performance of five item quality indicators (the weighted and unweighted versions of the mean square and standardized mean square fit indices and the point-measure correlation) under conditions of relatively high and low amounts of missing data under both random and conditional patterns of missing data for testing contexts such as those encountered in operational administrations of a computerized adaptive certification or licensure examination. The results suggest that weighted fit indices, particularly the standardized mean square index, and the point-measure correlation provide the most consistent information between random and conditional missing data patterns and that these indices perform more comparably for items near the passing score than for items with extreme difficulty values.
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence
Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria
2016-01-01
To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale and the intraparietal sulcus, demonstrating that this area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682
A non-equilibrium neutral model for analysing cultural change.
Kandler, Anne; Shennan, Stephen
2013-08-07
Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication of whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pan, Yue; Cai, Yimao; Liu, Yefan; Fang, Yichen; Yu, Muxi; Tan, Shenghu; Huang, Ru
2016-04-01
TaOx-based resistive random access memory (RRAM) attracts considerable attention for the development of next generation nonvolatile memories. However, read current noise in RRAM is one of the critical concerns for storage application, and its microscopic origin is still under debate. In this work, the read current noise in TaOx-based RRAM was studied thoroughly. Based on a noise power spectral density analysis at room temperature and at the ultra-low temperature of 25 K, discrete random telegraph noise (RTN) and continuous average current fluctuation (ACF) are identified and decoupled from the total read current noise in TaOx RRAM devices. A statistical comparison of noise amplitude further reveals that ACF depends strongly on the temperature, whereas RTN is independent of the temperature. Measurement results combined with conduction mechanism analysis show that RTN in TaOx RRAM devices arises from the electron trapping/detrapping process in the hopping conduction, and ACF originates from the thermal activation of the conduction centers that form the percolation network. Finally, a unified model in the framework of hopping conduction is proposed to explain the underlying mechanism of both RTN and ACF noise, which can provide meaningful guidelines for designing noise-immune RRAM devices.
NASA Astrophysics Data System (ADS)
Kenfack, Lionel Tenemeza; Tchoffo, Martin; Fai, Lukong Cornelius
2017-02-01
We address the dynamics of quantum correlations, including entanglement and quantum discord, of a three-qubit system interacting with a classical pure-dephasing random telegraph noise (RTN) in three different physical environmental situations (independent, mixed and common environments). Two initial entangled states of the system are examined, namely the Greenberger-Horne-Zeilinger (GHZ)- and Werner (W)-type states. The classical noise is introduced as a stochastic process affecting the energy splitting of the qubits. With the help of suitable measures of tripartite entanglement (entanglement witnesses and the lower bound of concurrence) and quantum discord (global quantum discord and quantum dissension), we show that the evolution of quantum correlations is affected not only by the type of the system-environment interaction but also by the input configuration of the qubits and the memory properties of the environmental noise. Indeed, depending on the memory properties of the environmental noise and the initial state considered, we find that independent, common and mixed environments can play opposite roles in preserving quantum correlations, and that sudden death and revival phenomena or the survival of quantum correlations may occur. On the other hand, we also show that the W-type state exhibits stronger dynamics under this noise than the GHZ-type ones.
Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence.
Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D; Chait, Maria
2016-09-01
To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale and the intraparietal sulcus, demonstrating that this area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis. © The Author 2016. Published by Oxford University Press.
Chen, Bor-Sen; Yeh, Chin-Hsun
2017-12-01
We review current static and dynamic evolutionary game strategies of biological networks and discuss the lack of random genetic variations and stochastic environmental disturbances in these models. To include these factors, a population of evolving biological networks is modeled as a nonlinear stochastic biological system with Poisson-driven genetic variations and random environmental fluctuations (stimuli). To gain insight into the evolutionary game theory of stochastic biological networks under natural selection, the phenotypic robustness and network evolvability of noncooperative and cooperative evolutionary game strategies are discussed from a stochastic Nash game perspective. The noncooperative strategy can be transformed into an equivalent multi-objective optimization problem and is shown to display significantly improved network robustness to tolerate genetic variations and buffer environmental disturbances, maintaining phenotypic traits for longer than the cooperative strategy. However, the noncooperative case requires greater effort and more compromises between partly conflicting players. Global linearization is used to simplify the problem of solving nonlinear stochastic evolutionary games. Finally, a simple stochastic evolutionary model of a metabolic pathway is simulated to illustrate the procedure of solving for two evolutionary game strategies and to confirm and compare their respective characteristics in the evolutionary process. Copyright © 2017 Elsevier B.V. All rights reserved.
Wuest, Simon L; Richard, Stéphane; Kopp, Sascha; Grimm, Daniela; Egli, Marcel
2015-01-01
Random Positioning Machines (RPMs) have been used for many years as a ground-based model to simulate microgravity. In this review we discuss several aspects of the RPM. Recent technological development has expanded the operative range of the RPM substantially. New possibilities of live cell imaging and partial gravity simulations, for example, are of particular interest. For obtaining valuable and reliable results from RPM experiments, the appropriate use of the RPM is of utmost importance. The simulation of microgravity requires that the RPM's rotation is faster than the biological process under study, but not so fast that undesired side effects appear. It remains a legitimate question, however, whether the RPM can accurately and reliably simulate microgravity conditions comparable to real microgravity in space. We attempt to answer this question by mathematically analyzing the forces working on the samples while they are mounted on the operating RPM and by comparing data obtained under real microgravity in space and simulated microgravity on the RPM. In conclusion, and after taking the mentioned constraints into consideration, we are convinced that simulated microgravity experiments on the RPM are a valid alternative for conducting examinations on the influence of the force of gravity in a fast and straightforward approach.
Wuest, Simon L.; Richard, Stéphane; Kopp, Sascha
2015-01-01
Random Positioning Machines (RPMs) have been used for many years as a ground-based model to simulate microgravity. In this review we discuss several aspects of the RPM. Recent technological development has expanded the operative range of the RPM substantially. New possibilities of live cell imaging and partial gravity simulations, for example, are of particular interest. For obtaining valuable and reliable results from RPM experiments, the appropriate use of the RPM is of utmost importance. The simulation of microgravity requires that the RPM's rotation is faster than the biological process under study, but not so fast that undesired side effects appear. It remains a legitimate question, however, whether the RPM can accurately and reliably simulate microgravity conditions comparable to real microgravity in space. We attempt to answer this question by mathematically analyzing the forces working on the samples while they are mounted on the operating RPM and by comparing data obtained under real microgravity in space and simulated microgravity on the RPM. In conclusion, and after taking the mentioned constraints into consideration, we are convinced that simulated microgravity experiments on the RPM are a valid alternative for conducting examinations on the influence of the force of gravity in a fast and straightforward approach. PMID:25649075
Analysis of dynamic system response to product random processes
NASA Technical Reports Server (NTRS)
Sidwell, K.
1978-01-01
The response of dynamic systems to the product of two independent Gaussian random processes is developed by use of the Fokker-Planck and associated moment equations. The development is applied to the amplitude modulated process which is used to model atmospheric turbulence in aeronautical applications. The exact solution for the system response is compared with the solution obtained by the quasi-steady approximation which omits the dynamic properties of the random amplitude modulation. The quasi-steady approximation is valid as a limiting case of the exact solution for the dynamic response of linear systems to amplitude modulated processes. In the nonlimiting case the quasi-steady approximation can be invalid for dynamic systems with low damping.
Groupies in multitype random graphs.
Shang, Yilun
2016-01-01
A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős-Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.
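For the single-type special case (an Erdős-Rényi graph), the groupie proportion quoted above can be checked numerically; the following sketch assumes illustrative values of n and p:

```python
import numpy as np

def groupie_fraction(n=2000, p=0.01, seed=0):
    """Fraction of vertices whose degree is >= the mean degree of their neighbors,
    in an Erdos-Renyi graph (a one-type instance of the multitype setting)."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < p, k=1)   # independent edges above the diagonal
    adj = (upper | upper.T).astype(int)
    deg = adj.sum(axis=1)
    ok = deg > 0                                   # skip isolated vertices
    nbr_mean = (adj @ deg)[ok] / deg[ok]           # average degree of each vertex's neighbors
    return np.mean(deg[ok] >= nbr_mean)

print(groupie_fraction())   # close to 1/2, as the theorem predicts
```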
Jeon, Jae-Hyung; Chechkin, Aleksei V; Metzler, Ralf
2014-08-14
Anomalous diffusion is frequently described by scaled Brownian motion (SBM), a Gaussian process with a power-law time dependent diffusion coefficient. Its mean squared displacement is ⟨x²(t)⟩ ≃ 2K(t)t with K(t) ≃ t^(α−1) for 0 < α < 2. SBM may provide a seemingly adequate description in the case of unbounded diffusion, for which its probability density function coincides with that of fractional Brownian motion. Here we show that free SBM is weakly non-ergodic but does not exhibit a significant amplitude scatter of the time averaged mean squared displacement. More severely, we demonstrate that under confinement, the dynamics encoded by SBM is fundamentally different from both fractional Brownian motion and continuous time random walks. SBM is highly non-stationary and cannot provide a physical description for particles in a thermalised stationary system. Our findings have direct impact on the modelling of single particle tracking experiments, in particular, under confinement inside cellular compartments or when optical tweezers tracking methods are used.
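A short sketch of free SBM with the power-law diffusivity quoted above (the confinement analysis of the paper is not reproduced; the step counts and α are illustrative choices):

```python
import numpy as np

def sbm_paths(n_paths=2000, n_steps=1000, dt=0.01, alpha=0.5, K0=1.0, seed=3):
    """Scaled Brownian motion: Gaussian increments with K(t) = K0 * t**(alpha - 1)."""
    rng = np.random.default_rng(seed)
    t = dt * np.arange(1, n_steps + 1)
    K = K0 * t ** (alpha - 1.0)
    dx = rng.normal(size=(n_paths, n_steps)) * np.sqrt(2.0 * K * dt)
    return t, np.cumsum(dx, axis=1)

t, x = sbm_paths()
ens_msd = (x ** 2).mean(axis=0)   # grows like (2 K0 / alpha) t**alpha, i.e. ~ 2 K(t) t
```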
In silico evidence for sequence-dependent nucleosome sliding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lequieu, Joshua; Schwartz, David C.; de Pablo, Juan J.
Nucleosomes represent the basic building block of chromatin and provide an important mechanism by which cellular processes are controlled. The locations of nucleosomes across the genome are not random but instead depend on both the underlying DNA sequence and the dynamic action of other proteins within the nucleus. These processes are central to cellular function, and the molecular details of the interplay between DNA sequence and nucleosome dynamics remain poorly understood. In this work, we investigate this interplay in detail by relying on a molecular model, which permits development of a comprehensive picture of the underlying free energy surfaces and the corresponding dynamics of nucleosome repositioning. The mechanism of nucleosome repositioning is shown to be strongly linked to DNA sequence and directly related to the binding energy of a given DNA sequence to the histone core. It is also demonstrated that chromatin remodelers can override DNA-sequence preferences by exerting torque, and the histone H4 tail is then identified as a key component by which DNA-sequence, histone modifications, and chromatin remodelers could in fact be coupled.
Fabrication and viscoelastic characteristics of waste tire rubber based magnetorheological elastomer
NASA Astrophysics Data System (ADS)
Ubaidillah; Choi, H. J.; Mazlan, S. A.; Imaduddin, F.; Harjana
2016-11-01
In this study, waste tire rubber (WTR) was successfully converted into a magnetorheological (MR) elastomer via high-pressure, high-temperature reclamation. The physical and rheological properties of the WTR-based MR elastomers were assessed for performance. The revulcanization process was carried out in the absence of magnetic fields, so the magnetizable particles were allowed to distribute randomly. Particle dispersion in the MR elastomer matrix was confirmed by scanning electron microscopy. The magnetization saturation and other magnetic properties were obtained with a vibrating-sample magnetometer. Rheological properties, including the MR effect, were examined under oscillatory loading in the absence and presence of magnetic fields using a rotational rheometer. The WTR-based MR elastomer exhibited tunable intrinsic properties under applied magnetic fields. The storage and loss moduli, along with the loss factor, changed with increasing frequency and during magnetization. Interestingly, a Payne effect was seen in all samples during dynamic strain-sweep testing. The Payne effect increased significantly with incremental increases in the magnetic field. This phenomenon is interpreted as the formation-destruction-reformation process undergone by the internal network chains in the MR elastomers.
The Effectiveness of Mandatory-Random Student Drug Testing
ERIC Educational Resources Information Center
James-Burdumy, Susanne; Goesling, Brian; Deke, John; Einspruch, Eric
2011-01-01
One approach some U.S. schools now use to combat high rates of adolescent substance use is school-based mandatory-random student drug testing (MRSDT). Under MRSDT, students and their parents sign consent forms agreeing to the students' participation in random drug testing as a condition of participating in athletics and other school-sponsored…
Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L
2017-11-20
We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which scales them up or down. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
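The published approach uses multilevel multiple imputation and longitudinal models; the following deliberately simplified sketch only isolates the role of the sensitivity parameter k, with the hypothetical arrays y_obs and y_imputed standing in for observed and imputed outcomes in one arm:

```python
import numpy as np

def k_sensitivity(y_obs, y_imputed, ks=(0.8, 0.9, 1.0, 1.1, 1.2)):
    """Recompute an arm mean after scaling the imputed outcomes by each k;
    the treatment-effect inference is then re-examined at every k."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_imputed = np.asarray(y_imputed, dtype=float)
    return {k: np.concatenate([y_obs, k * y_imputed]).mean() for k in ks}

# Toy usage with made-up outcome values:
print(k_sensitivity([5.1, 4.8, 6.0, 5.5], [4.0, 4.4]))
```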
Football fever: goal distributions and non-Gaussian statistics
NASA Astrophysics Data System (ADS)
Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.
2009-02-01
Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, especially the tails of the distributions are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopic point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows us to understand the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, here we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during the Cold War era as well as the unified league after 1990 to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
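A hedged sketch of the self-affirmation idea described above: a per-minute Bernoulli scoring process whose success probability is multiplied by a factor after each goal. The parameters (base rate p0 and multiplier kappa) are illustrative, not fitted to the football data:

```python
import numpy as np

def goals_with_self_affirmation(n_matches=100000, n_steps=90, p0=0.015,
                                kappa=1.3, seed=7):
    """Simulate one team's goals per match; scoring raises the per-minute
    scoring probability by a factor kappa (the self-affirmation component)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_matches, p0)
    goals = np.zeros(n_matches, dtype=int)
    for _ in range(n_steps):
        scored = rng.random(n_matches) < p
        goals += scored
        p = np.where(scored, np.minimum(kappa * p, 1.0), p)
    return goals

g = goals_with_self_affirmation()
print(g.var() / g.mean())   # > 1: overdispersed, heavier tail than a Poisson model
```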
The non-equilibrium allele frequency spectrum in a Poisson random field framework.
Kaj, Ingemar; Mugal, Carina F
2016-10-01
In population genetic studies, the allele frequency spectrum (AFS) efficiently summarizes genome-wide polymorphism data and shapes a variety of allele frequency-based summary statistics. While existing theory typically features equilibrium conditions, emerging methodology requires an analytical understanding of the build-up of the allele frequencies over time. In this work, we use the framework of Poisson random fields to derive new representations of the non-equilibrium AFS for the case of a Wright-Fisher population model with selection. In our approach, the AFS is a scaling-limit of the expectation of a Poisson stochastic integral and the representation of the non-equilibrium AFS arises in terms of a fixation time probability distribution. The known duality between the Wright-Fisher diffusion process and a birth and death process generalizing Kingman's coalescent yields an additional representation. The results carry over to the setting of a random sample drawn from the population and provide the non-equilibrium behavior of sample statistics. Our findings are consistent with and extend a previous approach where the non-equilibrium AFS solves a partial differential forward equation with a non-traditional boundary condition. Moreover, we provide a bridge to previous coalescent-based work, and hence tie several frameworks together. Since frequency-based summary statistics are widely used in population genetics, for example, to identify candidate loci of adaptive evolution, to infer the demographic history of a population, or to improve our understanding of the underlying mechanics of speciation events, the presented results are potentially useful for a broad range of topics. Copyright © 2016 Elsevier Inc. All rights reserved.
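For orientation, the equilibrium Poisson-random-field expectation of the AFS (the Sawyer-Hartl form, assuming the usual scaling conventions for θ and the scaled selection coefficient γ) can be evaluated numerically as below; the paper's contribution is the non-equilibrium analogue of this quantity, which is not reproduced here:

```python
from math import comb, exp
from scipy.integrate import quad

def expected_afs(n, gamma, theta=1.0):
    """Expected counts of sites at derived frequency i = 1..n-1 in a sample of
    size n, under the equilibrium PRF with selection (neutral limit ~ 2*theta/i)."""
    if abs(gamma) < 1e-9:
        f = lambda x: 2.0 / x                      # neutral limit of the density
    else:
        f = lambda x: (2.0 * (1.0 - exp(-2.0 * gamma * (1.0 - x)))
                       / ((1.0 - exp(-2.0 * gamma)) * x * (1.0 - x)))
    return [theta * quad(lambda x: comb(n, i) * x**i * (1 - x)**(n - i) * f(x),
                         0.0, 1.0)[0]
            for i in range(1, n)]

print(expected_afs(n=10, gamma=-2.0))   # negative selection shifts mass to low frequencies
```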
Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.
Jiang, Z; Chen, W; Burkhart, C
2013-11-01
Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) a 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically the two-point correlation function and the cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other models the microstructure using stochastic models like a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices accuracy due to issues in numerical implementations. A hybrid optimization approach to modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
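A minimal sketch of the Gaussian-random-field half of the hybrid approach: smooth white noise, then threshold at the quantile matching a target volume fraction. Grid size, correlation length and volume fraction are illustrative assumptions, and the correlation-matching optimization step is not shown:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def grf_microstructure(shape=(64, 64, 64), corr_len=4.0, vol_frac=0.3, seed=0):
    """Level-cut Gaussian random field: filter white noise to impose spatial
    correlation, then threshold to reproduce the target phase volume fraction."""
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.normal(size=shape), sigma=corr_len)
    level = np.quantile(field, 1.0 - vol_frac)
    return field > level

phase = grf_microstructure()
print(phase.mean())   # ~ 0.3 by construction
```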
Some Minorants and Majorants of Random Walks and Levy Processes
NASA Astrophysics Data System (ADS)
Abramson, Joshua Simon
This thesis consists of four chapters, all relating to some sort of minorant or majorant of random walks or Lévy processes. In Chapter 1 we provide an overview of recent work on descriptions and properties of the convex minorant of random walks and Lévy processes as detailed in Chapter 2, [72] and [73]. This work rejuvenated the field of minorants, and led to the work in all the subsequent chapters. The results surveyed include point process descriptions of the convex minorant of random walks and Lévy processes on a fixed finite interval, up to an independent exponential time, and in the infinite horizon case. These descriptions follow from the invariance of these processes under an adequate path transformation. In the case of Brownian motion, we note how further special properties of this process, including time-inversion, imply a sequential description for the convex minorant of the Brownian meander. This chapter is based on [3], which was co-written with Jim Pitman, Nathan Ross and Geronimo Uribe Bravo. Chapter 1 serves as a long introduction to Chapter 2, in which we offer a unified approach to the theory of concave majorants of random walks. The reasons for the switch from convex minorants to concave majorants are discussed in Section 1.1, but the results are all equivalent. This unified theory is arrived at by providing a path transformation for a walk of finite length that leaves the law of the walk unchanged whilst providing complete information about the concave majorant; the path transformation is different from the one discussed in Chapter 1, but this is necessary to deal with a more general case than the standard one, as done in Section 2.6. The path transformation of Chapter 1, which is discussed in detail in Section 2.8, is more relevant to the limiting results for Lévy processes that are of interest in Chapter 1. Our results lead to a description of a walk of random geometric length as a Poisson point process of excursions away from its concave majorant, which is then used to find a complete description of the concave majorant of a walk of infinite length. In the case where subsets of increments may have the same arithmetic mean (the more general case mentioned above), we investigate three nested compositions that naturally arise from our construction of the concave majorant. This chapter is based on [4], which was co-written with Jim Pitman. In Chapter 3, we study the Lipschitz minorant of a Lévy process. For α > 0, the α-Lipschitz minorant of a function f : R → R is the greatest function m : R → R such that m ≤ f and |m(s) − m(t)| ≤ α|s − t| for all s, t ∈ R, should such a function exist. If X = (X_t), t ∈ R, is a real-valued Lévy process that is not a pure linear drift with slope ±α, then the sample paths of X have an α-Lipschitz minorant almost surely if and only if |E[X_1]| < α. Denoting the minorant by M, we investigate properties of the random closed set Z := {t ∈ R : M_t = X_t ∧ X_{t−}}, which, since it is regenerative and stationary, has the distribution of the closed range of some subordinator "made stationary" in a suitable sense. We give conditions for the contact set Z to be countable or to have zero Lebesgue measure, and we obtain formulas that characterize the Lévy measure of the associated subordinator. We study the limit of Z as α → ∞ and find, for the so-called abrupt Lévy processes introduced by Vigon, that this limit is the set of local infima of X.
When X is a Brownian motion with drift β such that |β| < α, we calculate explicitly the densities of various random variables related to the minorant. This chapter is based on [2], which was co-written with Steven N. Evans. Finally, in Chapter 4 we study the structure of the shocks for the inviscid Burgers equation in dimension 1 when the initial velocity is given by Lévy noise, or equivalently when the initial potential ψ_0 is a two-sided Lévy process. This shock structure turns out to give rise to a parabolic minorant of the Lévy process; see Section 4.2 for details. The main results are that when ψ_0 is abrupt in the sense of Vigon, or has bounded variation with lim sup_{h↓0} h^{-2} ψ_0(h) = ∞, the set of points with zero velocity is regenerative, and that in the latter case this set is equal to the set of Lagrangian regular points, which is non-empty. When ψ_0 is abrupt the shock structure is discrete, and when ψ_0 is eroded there are no rarefaction intervals. This chapter is based on [1].
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
Quenched bond randomness: Superfluidity in porous media and the strong violation of universality
NASA Astrophysics Data System (ADS)
Falicov, Alexis; Berker, A. Nihat
1997-04-01
The effects of quenched bond randomness are most readily studied with superfluidity immersed in a porous medium. A lattice model for 3He-4He mixtures and incomplete 4He fillings in aerogel yields the signature effect of bond randomness, namely the conversion of symmetry-breaking first-order phase transitions into second-order phase transitions, the λ-line reaching zero temperature, and the elimination of non-symmetry-breaking first-order phase transitions. The model recognizes the importance of the connected nature of aerogel randomness and thereby yields superfluidity at very low 4He concentrations, a phase separation entirely within the superfluid phase, and the order-parameter contrast between mixtures and incomplete fillings, all in agreement with experiments. The special properties of the helium mixture/aerogel system are distinctly linked to the aerogel properties of connectivity, randomness, and tenuousness, via the additional study of a regularized “jungle-gym” aerogel. Renormalization-group calculations indicate that a strong violation of the empirical universality principle of critical phenomena occurs under quenched bond randomness. It is argued that helium/aerogel critical properties reflect this violation and further experiments are suggested. Renormalization-group analysis also shows that, adjoiningly to the strong universality violation (which hinges on the occurrence or non-occurrence of asymptotic strong coupling—strong randomness under rescaling), there is a new “hyperuniversality” at phase transitions with asymptotic strong coupling—strong randomness behavior, for example assigning the same critical exponents to random-bond tricriticality and random-field criticality.
NASA Astrophysics Data System (ADS)
Wu, Zhi-Xi; Rong, Zhihai; Yang, Han-Xin
2015-01-01
Recent empirical studies suggest that heavy-tailed distributions of human activities are universal in real social dynamics [L. Muchnik, S. Pei, L. C. Parra, S. D. S. Reis, J. S. Andrade Jr., S. Havlin, and H. A. Makse, Sci. Rep. 3, 1783 (2013), 10.1038/srep01783]. On the other hand, community structure is ubiquitous in biological and social networks [M. E. J. Newman, Nat. Phys. 8, 25 (2012), 10.1038/nphys2162]. Motivated by these facts, we here consider the evolutionary prisoner's dilemma game taking place on top of a real social network to investigate how the community structure and the heterogeneity in activity of individuals affect the evolution of cooperation. In particular, we account for a variation of the birth-death process (which can also be regarded as a proportional imitation rule from a social point of view) for the strategy updating under both weak and strong selection (meaning the payoffs harvested from games contribute either slightly or heavily to the individuals' performance). By implementing comparative studies, where the players are selected either randomly or in terms of their actual activities to play games with their immediate neighbors, we find that heterogeneous activity benefits the emergence of collective cooperation in a harsh environment (the action for cooperation is costly) under strong selection, whereas it impairs the formation of altruism under weak selection. Moreover, we find that the abundance of communities in the social network can evidently foster the formation of cooperation under strong selection, in contrast to the games evolving on randomized counterparts. Our results are therefore helpful for us to better understand the evolution of cooperation in real social systems.
Relaxation dynamics of maximally clustered networks
NASA Astrophysics Data System (ADS)
Klaise, Janis; Johnson, Samuel
2018-01-01
We study the relaxation dynamics of fully clustered networks (maximal number of triangles) to an unclustered state under two different edge dynamics—the double-edge swap, corresponding to degree-preserving randomization of the configuration model, and single edge replacement, corresponding to full randomization of the Erdős-Rényi random graph. We derive expressions for the time evolution of the degree distribution, edge multiplicity distribution and clustering coefficient. We show that under both dynamics networks undergo a continuous phase transition in which a giant connected component is formed. We calculate the position of the phase transition analytically using the Erdős-Rényi phenomenology.
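A sketch of the double-edge swap dynamics named above (the degree-preserving randomization), with rejection of proposals that would create self-loops or multi-edges; it assumes a simple undirected graph with enough rewirable edge pairs:

```python
import random

def double_edge_swaps(edges, n_swaps, seed=0):
    """Degree-preserving randomization: rewire (u,v),(x,y) -> (u,x),(v,y)."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    done = 0
    while done < n_swaps:
        i, j = rng.sample(range(len(edges)), 2)
        (u, v), (x, y) = edges[i], edges[j]
        if len({u, v, x, y}) < 4:
            continue                               # would create a self-loop
        if frozenset((u, x)) in present or frozenset((v, y)) in present:
            continue                               # would create a multi-edge
        present -= {frozenset((u, v)), frozenset((x, y))}
        present |= {frozenset((u, x)), frozenset((v, y))}
        edges[i], edges[j] = (u, x), (v, y)
        done += 1
    return edges

# Usage on a 10-node ring: degrees stay 2, edges get shuffled.
ring = [(i, (i + 1) % 10) for i in range(10)]
print(double_edge_swaps(ring, n_swaps=20))
```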
On the Wigner law in dilute random matrices
NASA Astrophysics Data System (ADS)
Khorunzhy, A.; Rodgers, G. J.
1998-12-01
We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
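A quick numerical illustration, under the simplifying assumption of i.i.d. Gaussian entries (the paper allows weakly dependent entries): dilute a symmetric random matrix and inspect its spectrum, which should approach the semicircle law after normalization.

```python
import numpy as np

def diluted_spectrum(N=1500, p=0.05, seed=0):
    """Eigenvalues of a randomly diluted symmetric Gaussian matrix, normalized so
    the empirical spectrum approaches the Wigner semicircle on [-2, 2]."""
    rng = np.random.default_rng(seed)
    entries = rng.normal(size=(N, N)) * (rng.random((N, N)) < p)  # keep each entry w.p. p
    upper = np.triu(entries, k=1)
    H = (upper + upper.T) / np.sqrt(N * p)        # entry variance p, so divide by sqrt(N p)
    return np.linalg.eigvalsh(H)

ev = diluted_spectrum()
print(ev.min(), ev.max())   # close to the semicircle edges -2 and 2
```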
Abnormal neural hierarchy in processing of verbal information in patients with schizophrenia.
Lerner, Yulia; Bleich-Cohen, Maya; Solnik-Knirsh, Shimrit; Yogev-Seligmann, Galit; Eisenstein, Tamir; Madah, Waheed; Shamir, Alon; Hendler, Talma; Kremer, Ilana
2018-01-01
Previous research indicates abnormal comprehension of verbal information in patients with schizophrenia. Yet the neural mechanism underlying the breakdown of verbal information processing in schizophrenia is poorly understood. Imaging studies in healthy populations have shown a network of brain areas involved in the hierarchical processing of verbal information over time. Here, we identified critical aspects of this hierarchy, examining patients with schizophrenia. Using functional magnetic resonance imaging, we examined various levels of information comprehension elicited by naturally presented verbal stimuli, from a set of randomly shuffled words to an intact story. Specifically, patients with first-episode schizophrenia (N = 15), their non-manifesting siblings (N = 14) and healthy controls (N = 15) listened to a narrated story and randomly scrambled versions of it. To quantify the degree of dissimilarity between the groups, we adopted an inter-subject correlation (inter-SC) approach, which estimates differences in the synchronization of neural responses within and between groups. The temporal topography found in the healthy and sibling groups was consistent with our previous findings - high synchronization in responses from early sensory toward high-order perceptual and cognitive areas. In patients with schizophrenia, stimuli with short and intermediate temporal scales evoked a typical pattern of reliable responses, whereas the story condition (long temporal scale) revealed robust and widespread disruption of the inter-SCs. In addition, the more similar the neural activity of patients with schizophrenia was to the average response in the healthy group, the less severe the positive symptoms of the patients. Our findings suggest that the system-level neural indication of abnormal verbal information processing in schizophrenia reflects disease manifestations.
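A minimal sketch of the leave-one-out inter-subject correlation computation underlying the inter-SC approach (the array shape and function name are assumptions for illustration; the study's preprocessing and group comparisons are not reproduced):

```python
import numpy as np

def inter_subject_correlation(ts):
    """Leave-one-out ISC: correlate each subject's response time course with
    the average time course of the remaining subjects.
    ts: array of shape (subjects, timepoints)."""
    ts = np.asarray(ts, dtype=float)
    isc = []
    for s in range(ts.shape[0]):
        others = np.delete(ts, s, axis=0).mean(axis=0)
        isc.append(np.corrcoef(ts[s], others)[0, 1])
    return np.array(isc)

# Toy usage: 15 subjects sharing a common signal plus noise yield high ISC.
rng = np.random.default_rng(0)
signal = rng.normal(size=200)
data = signal + 0.5 * rng.normal(size=(15, 200))
print(inter_subject_correlation(data).mean())
```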
Esmaily, Habibollah; Tayefi, Maryam; Doosti, Hassan; Ghayour-Mobarhan, Majid; Nezami, Hossein; Amirabadizadeh, Alireza
2018-04-24
We aimed to identify the associated risk factors of type 2 diabetes mellitus (T2DM) using a data mining approach, namely decision tree and random forest techniques, with data from the Mashhad Stroke and Heart Atherosclerotic Disorders (MASHAD) study program. A cross-sectional study. The MASHAD study started in 2010 and will continue until 2020. Two data mining tools, decision trees and random forests, are used to predict T2DM when some other characteristics are observed on 9528 subjects recruited from the MASHAD database. This paper makes a comparison between these two models in terms of accuracy, sensitivity, specificity and the area under the ROC curve. The prevalence rate of T2DM was 14% among these subjects. The decision tree model has 64.9% accuracy, 64.5% sensitivity, 66.8% specificity, and an area under the ROC curve of 68.6%, while the random forest model has 71.1% accuracy, 71.3% sensitivity, 69.9% specificity, and an area under the ROC curve of 77.3%. The random forest model, when used with demographic, clinical, anthropometric and biochemical measurements, can provide a simple tool to identify associated risk factors for type 2 diabetes. Such identification can be of substantial use in managing health policy to reduce the number of subjects with T2DM.
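A hedged sketch of the random-forest workflow with scikit-learn; the MASHAD data are not public, so a synthetic table of the same size stands in for the real risk factors, and all hyperparameters are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for MASHAD-style tabular risk factors: two informative
# columns drive a binary T2DM-like label.
rng = np.random.default_rng(0)
X = rng.normal(size=(9528, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=9528) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```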
Improving Search Algorithms by Using Intelligent Coordinates
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar
2004-01-01
We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function gη. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.
Improving search algorithms by using intelligent coordinates
NASA Astrophysics Data System (ADS)
Wolpert, David; Tumer, Kagan; Bandari, Esfandiar
2004-01-01
We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function gη. Three factors govern such a distributed algorithm’s performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm’s exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based “player” engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.
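A toy rendering of the core idea, game-playing agents replacing blind random exploration, one per coordinate: each ε-greedy "player" below learns which value of its own variable yields the highest global payoff. This team-game setup with a separable objective is a simplification, under our own assumptions, of the paper's approach (no annealing schedule, no private utility shaping).

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 8, 5                              # 8 coordinates, 5 values each
target = rng.integers(0, K, D)

def G(x):                                # global objective to maximize
    return -np.sum((np.asarray(x) - target) ** 2)

# One epsilon-greedy "player" per coordinate replaces blind random moves.
Q = np.zeros((D, K)); N = np.ones((D, K)); eps = 0.2
x = rng.integers(0, K, D)
for _ in range(2000):
    for d in range(D):                   # each agent sets only its variable
        x[d] = rng.integers(0, K) if rng.random() < eps else int(Q[d].argmax())
    g = G(x)
    for d in range(D):                   # all agents learn from the global payoff
        a = x[d]; N[d, a] += 1
        Q[d, a] += (g - Q[d, a]) / N[d, a]

greedy = Q.argmax(axis=1)
print("greedy solution:", greedy, "target:", target, "G:", G(greedy))
```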
NASA Astrophysics Data System (ADS)
Khonina, S. N.; Karpeev, S. V.; Paranin, V. D.
2018-06-01
A technique for the simultaneous detection of individual vortex states of beams propagating in a randomly inhomogeneous medium is proposed. The developed optical system relies on a correlation method that is invariant to beam wandering. The intensity distribution formed at the optical system output does not require digital processing. The proposed technique, based on a multi-order phase diffractive optical element (DOE), is studied numerically and experimentally. The developed detection technique is used for the analysis of Laguerre-Gaussian vortex beams propagating under conditions of intense absorption, reflection, and scattering in transparent and opaque microparticles in aqueous suspensions. The experimental studies confirm that the vortex phase structure of a laser beam remains detectable under conditions of significant absorption, reflection, and scattering of the light.
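The correlation principle behind multi-order detection can be sketched numerically: project the field onto conjugate vortex phases exp(-ilφ) and pick the topological charge with the largest overlap, which is what a multi-order DOE computes in parallel. The grid, beam profile, and charge range below are illustrative assumptions, not the authors' DOE design.

```python
import numpy as np

# Grid and a test beam carrying topological charge l = +2
N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)
beam = R**2 * np.exp(-R**2) * np.exp(2j * PHI)   # LG-like vortex, l = 2

# Correlation detection: project onto conjugate vortex phases exp(-i*l*phi);
# the largest |overlap| marks the beam's charge.
for l in range(-3, 4):
    overlap = np.abs(np.sum(beam * np.exp(-1j * l * PHI)))
    print(f"l = {l:+d}: |overlap| = {overlap:.1f}")
```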
NASA Astrophysics Data System (ADS)
Yu, Mei; Wang, Chong; Yang, Cancan; Yu, Zhe
2017-11-01
Owing to their ability to undergo large stretching, compression, bending and twisting deformations while preserving their electrical properties, metal films on elastomeric substrates have many applications as bioelectrical interfaces. However, at present, most reported polymer-supported thin metal films rupture at small elongations (<10%). In this work, highly stretchable thin gold films were fabricated on PDMS substrates by a novel micro-processing technology. The as-deposited films can be stretched to a maximum strain of 120% while maintaining their electrical conductivity. Electrical characteristics of the gold films under single-cycle and multi-cycle stretch deformations are investigated in this work. SEM images show that the gold films exhibit a nanocrack structure. The stretchability of the gold films can be explained by these nanocracks, which are uniformly distributed with random orientations throughout the films.
Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging
NASA Astrophysics Data System (ADS)
Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi
2016-10-01
In this paper, we report on a comparison of single-pixel imaging using the Hadamard transform (HT) and ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. To compare the two methods, we discuss image quality on the basis of experimental results and numerical analysis. In the HT method, images are detected by illuminating Hadamard-pattern masks and reconstructing via the orthogonal transform. The GI method, on the other hand, detects images by illuminating random patterns and performing a correlation measurement. To compare the two methods under weak light, we controlled the illumination intensity of a DMD projector to give a signal-to-noise ratio of about 0.1. Although the HT method reconstructed images faster than GI, the GI method has an advantage for detection under weak-light conditions. An essential difference between the HT and GI methods, concerning the reconstruction process, is discussed. Finally, we also show a typical application of single-pixel imaging: hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detected hyperspectral images over the range from 1545 to 1555 nm at 0.01 nm resolution.
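A minimal sketch of the HT half of the comparison, assuming an ideal single-pixel detector: each Hadamard row acts as one displayed pattern, each bucket reading is one inner product, and the image is recovered by the orthogonal inverse transform. The pattern size and noise level are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

n = 8                               # image is n x n; N = n*n patterns
N = n * n
H = hadamard(N)                     # +1/-1 Hadamard patterns, one per row
img = np.zeros((n, n)); img[2:6, 3:5] = 1.0   # toy object
s = img.ravel()

# Single-pixel measurements: one bucket value per displayed pattern.
# (Real DMDs show 0/1 masks; differential display of H+ and H- gives +/-1.)
y = H @ s + 0.05 * np.random.default_rng(0).standard_normal(N)

# Reconstruction by the inverse (orthogonal) transform: H^T H = N * I.
recon = (H.T @ y) / N
print(np.abs(recon.reshape(n, n) - img).max())   # residual ~ noise level
```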
Zepf, Florian D; Gaber, Tilman J; Baurmann, David; Bubenzer, Sarah; Konrad, Kerstin; Herpertz-Dahlmann, Beate; Stadler, Christina; Poustka, Fritz; Wöckel, Lars
2010-08-01
Deficiencies in serotonergic (5-HT) neurotransmission have frequently been linked to altered attention and memory processes. With attention deficit hyperactivity disorder (ADHD) being associated with impaired attention and working memory, this study investigated the effects of diminished 5-HT turnover, achieved by rapid tryptophan depletion (RTD), on attentional performance in children and adolescents with ADHD. Twenty-two male patients with ADHD (aged 9-15 yr) received the RTD procedure Moja-De and a tryptophan (Trp)-balanced placebo (Pla) in a randomized, double-blind, within-subject crossover design on two separate study days. Lapses of attention (LA) and phasic alertness (PA) were assessed within the test battery for attentional performance under depleted and sham-depleted conditions 120 (T1), 220 (T2) and 300 (T3) min after intake of RTD/Pla. At T1 there was a significant main effect of RTD, indicating more LA under intake of the Trp-balanced Pla compared to diminished 5-HT neurotransmission. At T2/T3 there were no such effects. PA was not affected by the factors RTD/Pla and time. Interactions of 5-HT with other neurotransmitters, as possible underlying neurochemical processes, could be the subject of further investigations of altered attentional performance involving healthy controls.
Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing
NASA Technical Reports Server (NTRS)
Jones, Robert L.; Goode, Plesent W. (Technical Monitor)
2000-01-01
The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states for some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and is capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient, stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, the new formalism offers added modeling fidelity stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.
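A standard building block that such decompositions rely on is the transient solution of a continuous-time Markov chain. The uniformization sketch below, with a made-up 3-state generator, illustrates one of the "simpler Markovian problems"; it is not the paper's own algorithm.

```python
import numpy as np

# Transient solution of a small CTMC by uniformization.
# The 3-state generator matrix Q is invented for illustration.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
p0, t = np.array([1.0, 0.0, 0.0]), 1.5

lam = np.max(-np.diag(Q))                 # uniformization rate
P = np.eye(3) + Q / lam                   # embedded DTMC kernel
term = np.exp(-lam * t) * p0              # k = 0 term of the Poisson series
p = term.copy()
for k in range(1, 200):                   # Poisson-weighted power series
    term = (lam * t / k) * (term @ P)
    p = p + term
print("state probabilities at t =", t, ":", p.round(4))
```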
A new approach to evaluate gamma-ray measurements
NASA Technical Reports Server (NTRS)
Dejager, O. C.; Swanepoel, J. W. H.; Raubenheimer, B. C.; Vandervalt, D. J.
1985-01-01
Misunderstandings about the term 'random sample' and its implications may easily arise. Conditions under which the phases obtained from arrival times do not form a random sample, and the dangers involved, are discussed. Watson's U² test for uniformity is recommended for light curves with duty cycles larger than 10%. Under certain conditions, non-parametric density estimation may be used to determine estimates of the true light curve and its parameters.
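For reference, Watson's U² statistic is simple to compute from phases folded into [0, 1); the sketch below uses the standard formula (the Cramér-von Mises statistic corrected for the circular mean, which makes it rotation-invariant) on made-up data. The commonly tabulated asymptotic 5% point is about 0.187.

```python
import numpy as np

def watson_u2(phases):
    """Watson's U^2 statistic for uniformity of phases in [0, 1)."""
    u = np.sort(np.asarray(phases) % 1.0)
    n = len(u)
    i = np.arange(1, n + 1)
    w2 = np.sum((u - (2 * i - 1) / (2 * n)) ** 2) + 1.0 / (12 * n)
    return w2 - n * (u.mean() - 0.5) ** 2     # rotation-invariant on the circle

rng = np.random.default_rng(0)
print(watson_u2(rng.random(500)))                     # uniform phases: small U^2
pulsed = (0.05 * rng.standard_normal(500) + 0.3) % 1  # peaked light curve
print(watson_u2(pulsed))                              # large U^2: reject uniformity
```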
Henckens, Marloes J A G; Klumpers, Floris; Everaerd, Daphne; Kooijman, Sabine C; van Wingen, Guido A; Fernández, Guillén
2016-04-01
Stress exposure is known to precipitate psychological disorders. However, large differences exist in how individuals respond to stressful situations. A major marker for stress sensitivity is hypothalamus-pituitary-adrenal (HPA)-axis function. Here, we studied how interindividual variance in both basal cortisol levels and stress-induced cortisol responses predicts differences in neural vigilance processing during stress exposure. Implementing a randomized, counterbalanced, crossover design, 120 healthy male participants were exposed to a stress-induction and control procedure, followed by an emotional perception task (viewing fearful and happy faces) during fMRI scanning. Stress sensitivity was assessed using physiological (salivary cortisol levels) and psychological measures (trait questionnaires). High stress-induced cortisol responses were associated with increased stress sensitivity as assessed by psychological questionnaires, a stronger stress-induced increase in medial temporal activity and greater differential amygdala responses to fearful as opposed to happy faces under control conditions. In contrast, high basal cortisol levels were related to relative stress resilience as reflected by higher extraversion scores, a lower stress-induced increase in amygdala activity and enhanced differential processing of fearful compared with happy faces under stress. These findings seem to reflect a critical role for HPA-axis signaling in stress coping; higher basal levels indicate stress resilience, whereas higher cortisol responsivity to stress might facilitate recovery in those individuals prone to react sensitively to stress.
NASA Astrophysics Data System (ADS)
Jaggi, Chandra K.; Mittal, Mandeep; Khanna, Aditi
2013-09-01
In this article, an Economic Order Quantity (EOQ) model has been developed with unreliable supply, where each received lot may contain a random fraction of defective items with a known distribution. Thus, inspection of the lot becomes essential in almost all situations. Moreover, its role becomes more significant when the items are deteriorating in nature. It is assumed that defective items are salvaged as a single batch after the screening process. Further, it has been observed that both the demand and the price of certain consumer items increase linearly with time, especially under inflationary conditions. Owing to this fact, this article investigates the impact of defective items on the retailer's ordering policy for deteriorating items under inflation when both demand and price vary with the passage of time. The proposed model optimises the order quantity by maximising the retailer's expected profit. Results are demonstrated with the help of a numerical example, and a sensitivity analysis is also presented to provide managerial insights for practice.
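A deliberately simplified, stationary stand-in for such a model (no inflation, deterioration, or time-varying demand) shows how a random defective fraction enters the expected-profit EOQ trade-off; all parameter values below are invented for illustration, and the holding-cost term is only a rough average-inventory approximation.

```python
import numpy as np

# Invented parameters: demand/yr, order cost, unit cost, price, salvage, holding
D, K, c, s, v, h = 50_000, 100.0, 25.0, 50.0, 20.0, 5.0
rng = np.random.default_rng(0)
p_samples = rng.uniform(0.0, 0.04, 20_000)    # defective fraction ~ U(0, 0.04)

def expected_profit(Q):
    p = p_samples
    good = (1 - p) * Q
    cycle = good / D                           # cycle length set by good units
    revenue = s * good + v * p * Q             # sell good items, salvage defectives
    cost = K + c * Q + h * Q * cycle / 2       # order + purchase + rough holding
    return np.mean((revenue - cost) / cycle)   # expected profit per unit time

Qs = np.arange(200, 4001, 10)
best = Qs[np.argmax([expected_profit(Q) for Q in Qs])]
print("profit-maximizing order quantity ~", best)
```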
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to the aggressiveness of environmental factors, the variation of dynamic loads, the degeneration of material properties and the wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties when only limited information is available. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be efficiently calculated, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples: a spring-mass-damper system and a shell structure.
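The surrogate-plus-sampling idea behind MCM-CPE can be sketched in one dimension: fit a Chebyshev expansion of the response over the interval parameter from a few exact evaluations, then let the Monte Carlo samples hit only the cheap surrogate. The undamped step-load example, interval bounds, and polynomial degree below are our own illustrative assumptions, and affine arithmetic is omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# SDOF system under a step load: exact response u(t; k) for stiffness k.
m, F0, t = 1.0, 1.0, 2.0
def response(k):
    w = np.sqrt(k / m)
    return (F0 / k) * (1 - np.cos(w * t))

k_lo, k_hi = 80.0, 120.0               # stiffness known only as an interval

# Degree-8 Chebyshev surrogate from 9 exact evaluations at Gauss-Chebyshev nodes
nodes = np.cos((2 * np.arange(9) + 1) * np.pi / 18)
k_nodes = 0.5 * (k_hi + k_lo) + 0.5 * (k_hi - k_lo) * nodes
coef = C.chebfit(nodes, [response(k) for k in k_nodes], 8)

# Monte Carlo over the interval hits only the cheap surrogate.
rng = np.random.default_rng(0)
ks = rng.uniform(k_lo, k_hi, 100_000)
xi = (2 * ks - (k_lo + k_hi)) / (k_hi - k_lo)      # map stiffness to [-1, 1]
u = C.chebval(xi, coef)
print("response bounds ~ [%.4f, %.4f]" % (u.min(), u.max()))
```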
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the Fast Fourier Transform (FFT) technique is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. The numerical simulation reveals the usefulness of the dimension-reduction representation methods.
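A minimal single-variate version of the FFT-accelerated spectral representation method, with an assumed toy power spectral density; the dimension-reduction constraints (random functions coupling the phase variables) are not reproduced here.

```python
import numpy as np

# Classical single-variate SRM with i.i.d. random phases, FFT-accelerated.
N, dw = 4096, 0.01                        # frequency points and step [rad/s]
w = np.arange(N) * dw
S = 1.0 / (1.0 + (w / 2.0) ** 4)          # assumed one-sided target PSD

rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, N)        # random phases
A = np.sqrt(2 * S * dw) * np.exp(1j * phi)
x = np.real(np.fft.fft(A, 2 * N))         # samples on t_j = 2*pi*j / (2*N*dw)
print("simulated variance %.3f vs target %.3f" % (x.var(), np.sum(S * dw)))
```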
Minimalist design of a robust real-time quantum random number generator
NASA Astrophysics Data System (ADS)
Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.
2015-08-01
We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
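The raw-bit stage can be sketched as follows, with a Poisson process standing in for detector clicks and von Neumann debiasing standing in for the paper's look-up-table extractor (whose contents are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(7)                 # stand-in for the photon source
intervals = rng.exponential(1.0, 200_000)      # times between detector clicks

# One raw bit per click pair: compare successive intervals (robust to drift).
raw = (intervals[0::2] > intervals[1::2]).astype(np.uint8)

# Simple deterministic post-processing stand-in for the paper's look-up table:
# von Neumann debiasing on raw bit pairs (01 -> 0, 10 -> 1, 00/11 discarded).
pairs = raw[: len(raw) // 2 * 2].reshape(-1, 2)
keep = pairs[:, 0] != pairs[:, 1]
bits = pairs[keep, 0]
print(len(bits), "bits, mean =", bits.mean())  # mean should be close to 0.5
```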
Random numbers from vacuum fluctuations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543
2016-07-25
We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
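The LFSR component can be illustrated in a few lines. The sketch below XORs a biased raw stream with a maximal-length 16-bit LFSR sequence, which whitens the bias but, unlike a true extractor, adds no entropy; the tap positions and seed are textbook choices, not the paper's, and the actual device processes the digitized signal through the register rather than XORing with a free-running output.

```python
import numpy as np

def lfsr_stream(n, state=0xACE1, taps=(16, 14, 13, 11)):
    """Fibonacci LFSR over GF(2); these taps give a maximal-length
    16-bit register (a common textbook choice)."""
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        out[i] = state & 1
        state = (state >> 1) | (bit << 15)
    return out

rng = np.random.default_rng(1)
raw = (rng.random(10_000) < 0.55).astype(np.uint8)   # slightly biased raw bits
mixed = raw ^ lfsr_stream(raw.size)                  # whitening by XOR
print("raw bias %.3f -> mixed bias %.3f" % (raw.mean(), mixed.mean()))
```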
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E(R sup m), is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E(R sup m) for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
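A rough illustration of the idea, with an AR(2) process standing in for a narrow-band stress history and simple range counting between successive extrema standing in for full rainflow counting; the coefficients are chosen only to give a unimodal spectral peak and are not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(2) surrogate for a narrow-band Gaussian stress with a unimodal PSD
# (complex poles at radius ~0.95 give a clear spectral peak).
n, a1, a2 = 200_000, 1.8, -0.9
x = np.zeros(n)
e = rng.standard_normal(n)
for i in range(2, n):
    x[i] = a1 * x[i - 1] + a2 * x[i - 2] + e[i]

# Extract the extrema sequence (alternating peaks and valleys).
dx = np.diff(x)
ext = x[1:-1][np.sign(dx[:-1]) != np.sign(dx[1:])]

# Range counting between successive extrema (a crude stand-in for rainflow),
# then the stress-range moments E[R^m].
R = np.abs(np.diff(ext))
for m in (1, 2, 3):
    print("E[R^%d] = %.2f" % (m, np.mean(R ** m)))
```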
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
NASA Astrophysics Data System (ADS)
Ghannad, Z.; Hakimi Pajouh, H.
2017-12-01
In this work, the motion of a dust particle under the influence of the random force due to dust charge fluctuations is considered as a non-Markovian stochastic process. Memory effects in the velocity process of the dust particle are studied. A model is developed based on the fractional Langevin equation for the motion of the dust grain. The fluctuation-dissipation theorem for the dust grain is derived from this equation. The mean-square displacement and the velocity autocorrelation function of the dust particle are obtained in terms of the Mittag-Leffler functions. Their asymptotic behavior and the dust particle temperature due to charge fluctuations are studied in the long-time limit. As an interesting result, it is found that the presence of memory effects in the velocity process of the dust particle as a non-Markovian process can cause anomalous diffusion in dusty plasmas. In this case, the velocity autocorrelation function of the dust particle has a power-law decay like t^(-α-2), where the exponent α takes values 0 < α < 1.
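For reference, these are the standard Mittag-Leffler definitions and tail behaviour (textbook facts, not taken from the paper) that produce such power-law decays:

```latex
% Two-parameter Mittag-Leffler function and its heavy tail for 0 < alpha < 1:
E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)},
\qquad
E_{\alpha}(-t^{\alpha}) \sim \frac{t^{-\alpha}}{\Gamma(1-\alpha)}
\quad (t \to \infty,\; 0 < \alpha < 1).
```

Correlation functions built from Mittag-Leffler relaxation therefore inherit power-law tails of the kind reported here.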
Knowledge translation interventions for critically ill patients: a systematic review.
Sinuff, Tasnim; Muscedere, John; Adhikari, Neill K J; Stelfox, Henry T; Dodek, Peter; Heyland, Daren K; Rubenfeld, Gordon D; Cook, Deborah J; Pinto, Ruxandra; Manoharan, Venika; Currie, Jan; Cahill, Naomi; Friedrich, Jan O; Amaral, Andre; Piquette, Dominique; Scales, Damon C; Dhanani, Sonny; Garland, Allan
2013-11-01
We systematically reviewed ICU-based knowledge translation studies to assess the impact of knowledge translation interventions on processes and outcomes of care. We searched electronic databases (to July, 2010) without language restrictions and hand-searched reference lists of relevant studies and reviews. Two reviewers independently identified randomized controlled trials and observational studies comparing any ICU-based knowledge translation intervention (e.g., protocols, guidelines, and audit and feedback) to management without a knowledge translation intervention. We focused on clinical topics that were addressed in greater than or equal to five studies. Pairs of reviewers abstracted data on the clinical topic, knowledge translation intervention(s), process of care measures, and patient outcomes. For each individual or combination of knowledge translation intervention(s) addressed in greater than or equal to three studies, we summarized each study using median risk ratio for dichotomous and standardized mean difference for continuous process measures. We used random-effects models. Anticipating a small number of randomized controlled trials, our primary meta-analyses included randomized controlled trials and observational studies. In separate sensitivity analyses, we excluded randomized controlled trials and collapsed protocols, guidelines, and bundles into one category of intervention. We conducted meta-analyses for clinical outcomes (ICU and hospital mortality, ventilator-associated pneumonia, duration of mechanical ventilation, and ICU length of stay) related to interventions that were associated with improvements in processes of care. From 11,742 publications, we included 119 investigations (seven randomized controlled trials, 112 observational studies) on nine clinical topics. Interventions that included protocols with or without education improved continuous process measures (seven observational studies and one randomized controlled trial; standardized mean difference [95% CI]: 0.26 [0.1, 0.42]; p = 0.001 and four observational studies and one randomized controlled trial; 0.83 [0.37, 1.29]; p = 0.0004, respectively). Heterogeneity among studies within topics ranged from low to extreme. The exclusion of randomized controlled trials did not change our results. Single-intervention and lower-quality studies had higher standardized mean differences compared to multiple-intervention and higher-quality studies (p = 0.013 and 0.016, respectively). There were no associated improvements in clinical outcomes. Knowledge translation interventions in the ICU that include protocols with or without education are associated with the greatest improvements in processes of critical care.
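The pooling step described here is typically a DerSimonian-Laird random-effects model; below is a sketch with invented study values (not the review's data).

```python
import numpy as np

def random_effects_smd(smd, se):
    """DerSimonian-Laird random-effects pooling of standardized mean differences."""
    smd, se = np.asarray(smd), np.asarray(se)
    w = 1 / se**2                                    # fixed-effect weights
    mu_fe = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - mu_fe) ** 2)               # Cochran's Q
    df = len(smd) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)                        # random-effects weights
    mu = np.sum(w_re * smd) / np.sum(w_re)
    se_mu = 1 / np.sqrt(np.sum(w_re))
    return mu, (mu - 1.96 * se_mu, mu + 1.96 * se_mu), tau2

mu, ci, tau2 = random_effects_smd(
    smd=[0.10, 0.35, 0.22, 0.41, 0.05], se=[0.12, 0.15, 0.10, 0.20, 0.11])
print("pooled SMD %.2f, 95%% CI (%.2f, %.2f), tau^2 = %.3f" % (mu, *ci, tau2))
```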
Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.
Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai
2017-02-20
A chaotic external-cavity semiconductor laser (ECL) is a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using physical broadband white chaos generated by optical heterodyning of two ECLs as an entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectral efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
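The post-processing-free extraction amounts to keeping the low bits of each ADC code. A sketch, with Gaussian noise standing in for the white chaos waveform and an idealized 8-bit ADC:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for sampled white heterodyne chaos: a roughly symmetric,
# zero-mean analog waveform digitized by an 8-bit ADC over [-1, 1].
analog = rng.normal(0.0, 0.2, 1_000_000)
codes = np.clip(((analog + 1) / 2 * 256).astype(np.int64), 0, 255)

# Keep only the 4 least significant bits of every sample: 4 bits/sample,
# i.e., 320 Gbps at the reported 80-GHz sampling rate.
lsb4 = codes & 0b1111
bits = ((lsb4[:, None] >> np.arange(4)) & 1).astype(np.uint8).ravel()
print(bits.size, "bits, ones fraction = %.4f" % bits.mean())
```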
Information content versus word length in random typing
NASA Astrophysics Data System (ADS)
Ferrer-i-Cancho, Ramon; Moscoso del Prado Martín, Fermín
2011-12-01
Recently, it has been claimed that a linear relationship between a measure of information content and word length is expected from word length optimization, and it has been shown that this linearity is supported by a strong correlation between information content and word length in many languages (Piantadosi et al 2011 Proc. Nat. Acad. Sci. 108 3825). Here, we study in detail some connections between this measure and standard information theory. The relationship between the measure and word length is studied for the popular random typing process, where a text is constructed by pressing keys at random from a keyboard containing letters and a space behaving as a word delimiter. Although this random process does not optimize word lengths according to information content, it exhibits a linear relationship between information content and word length. The exact slope and intercept are presented for three major variants of the random typing process. A strong correlation between information content and word length can simply arise from the units making up a word (e.g., letters) and not necessarily from the interplay between a word and its context, as proposed by Piantadosi and co-workers. In itself, the linear relation does not entail the results of any optimization process.
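The random typing process itself takes only a few lines to simulate. The sketch below presses 27 keys uniformly at random and estimates the empirical information content per word length; finite sampling undershoots for longer, rarer words, but the roughly linear growth at short lengths is visible. The exact slopes and intercepts derived in the paper are not reproduced here.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
letters = 26                                       # key 26 is the space bar
keys = rng.integers(0, letters + 1, 5_000_000)
chunks = np.split(keys, np.flatnonzero(keys == letters))
words = [tuple(w[w != letters]) for w in chunks]
words = [w for w in words if len(w) > 0]

counts = Counter(words)
total = sum(counts.values())
# Empirical information content -log2 p(word), grouped by word length.
by_len = {}
for w, c in counts.items():
    by_len.setdefault(len(w), []).append(-np.log2(c / total))
for L in sorted(by_len)[:6]:
    print("length %d: mean info %.2f bits" % (L, np.mean(by_len[L])))
```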
A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents
Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha
2017-01-01
Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector, for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models have reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control scheme, enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872
A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.
Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha
2017-01-01
Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector, for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models have reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control scheme, enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.
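The path-integration core can be sketched with a cosine-tuned circular array: each step adds a compass-modulated bump of activity, and the population vector decodes the accumulated displacement exactly. The cell count, walk statistics, and tuning below are illustrative assumptions, not the paper's network.

```python
import numpy as np

# Path integration with a circular array of heading cells: each step's
# compass direction and odometric step length are accumulated per cell,
# and the population vector recovers the stored location.
n_cells = 36
prefs = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)

rng = np.random.default_rng(4)
acc = np.zeros(n_cells)                    # circular-array activity pattern
pos = np.zeros(2)                          # ground-truth position
for _ in range(500):                       # random outbound foraging walk
    heading = rng.uniform(0, 2 * np.pi)
    step = rng.uniform(0.5, 1.5)
    acc += step * np.cos(prefs - heading)  # cosine-tuned compass input
    pos += step * np.array([np.cos(heading), np.sin(heading)])

# Decode the stored vector from the population activity.
vec = np.array([np.sum(acc * np.cos(prefs)), np.sum(acc * np.sin(prefs))])
vec *= 2 / n_cells                         # normalization for cosine tuning
print("true position:", pos.round(2), " decoded:", vec.round(2))
```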