NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.; Zeiler, Thomas A.; Perry, Boyd, III
1989-01-01
This paper describes and illustrates two ways of performing time-correlated gust-load calculations. The first is based on Matched Filter Theory; the second on Random Process Theory. Both approaches yield theoretically identical results, represent novel applications of the theories, are computationally fast, and may be applied to other dynamic-response problems. A theoretical development and example calculations using both the Matched Filter Theory and Random Process Theory approaches are presented.
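A minimal numerical sketch of the matched-filter idea behind the first approach, assuming a simple one-degree-of-freedom load transfer function (all parameters below are illustrative, not taken from the paper): among unit-energy excitations of a linear system, the peak response is maximized by the time-reversed impulse response, and the attained peak equals the square root of the impulse-response energy.

```python
import numpy as np

# Illustrative impulse response standing in for a gust-to-load transfer
# function: a lightly damped one-DOF oscillator (all parameters hypothetical).
dt = 0.01
t = np.arange(0.0, 10.0, dt)
wn, zeta = 2 * np.pi * 1.0, 0.05            # natural frequency, damping ratio
wd = wn * np.sqrt(1 - zeta**2)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd

# Matched excitation: time-reversed impulse response, scaled to unit energy.
energy = np.sum(h**2) * dt
x = h[::-1] / np.sqrt(energy)

# The convolution response peaks exactly at the Cauchy-Schwarz bound
# sqrt(energy): no other unit-energy excitation yields a larger peak load.
y = np.convolve(x, h)[: len(t)] * dt
print("peak response    :", y.max())
print("theoretical bound:", np.sqrt(energy))
```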
An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response
Stipčević, Mario; Ursin, Rupert
2015-01-01
Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process that can be described only by a probabilistic theory, even in principle. Here we present a conceptually simple implementation which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits ultra low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrates the maturity and overall understanding of the technology. PMID:26057576
Relationships between digital signal processing and control and estimation theory
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1978-01-01
Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Pototzky, Anthony S.
1989-01-01
A theoretical basis and example calculations are given that demonstrate the relationship between the Matched Filter Theory approach to the calculation of time-correlated gust loads and the Phased Design Loads Analysis in common use in the aerospace industry. The relationship depends upon the duality between Matched Filter Theory and Random Process Theory and upon the fact that Random Process Theory is used in Phased Design Loads Analysis in determining an equiprobable loads design ellipse. Extensive background information is given describing the relevant points of Phased Design Loads Analysis, calculating time-correlated gust loads with Matched Filter Theory, and the duality between Matched Filter Theory and Random Process Theory. It is then shown that the time histories of two time-correlated gust load responses, determined using the Matched Filter Theory approach, can be plotted as parametric functions of time and that the resulting plot, when superposed upon the design ellipse corresponding to the two loads, is tangent to the ellipse. The question is raised whether it is possible for a parametric load plot to extend outside the associated design ellipse. If it is possible, then the use of the equiprobable loads design ellipse will not be a conservative design practice in some circumstances.
ERIC Educational Resources Information Center
Lavenda, Bernard H.
1985-01-01
Explains the phenomenon of Brownian motion, which serves as a mathematical model for random processes. Topics addressed include kinetic theory, Einstein's theory, particle displacement, and others. Points out that observations of the random course of a particle suspended in fluid led to the first accurate measurement of atomic mass. (DH)
Using circuit theory to model connectivity in ecology, evolution, and conservation.
McRae, Brad H; Dickson, Brett G; Keitt, Timothy H; Shah, Viral B
2008-10-01
Connectivity among populations and habitats is important for a wide range of ecological processes. Understanding, preserving, and restoring connectivity in complex landscapes requires connectivity models and metrics that are reliable, efficient, and process based. We introduce a new class of ecological connectivity models based in electrical circuit theory. Although they have been applied in other disciplines, circuit-theoretic connectivity models are new to ecology. They offer distinct advantages over common analytic connectivity models, including a theoretical basis in random walk theory and an ability to evaluate contributions of multiple dispersal pathways. Resistance, current, and voltage calculated across graphs or raster grids can be related to ecological processes (such as individual movement and gene flow) that occur across large population networks or landscapes. Efficient algorithms can quickly solve networks with millions of nodes, or landscapes with millions of raster cells. Here we review basic circuit theory, discuss relationships between circuit and random walk theories, and describe applications in ecology, evolution, and conservation. We provide examples of how circuit models can be used to predict movement patterns and fates of random walkers in complex landscapes and to identify important habitat patches and movement corridors for conservation planning.
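A minimal sketch of the core circuit-theoretic computation, on a made-up five-node habitat graph (conductances are illustrative): the effective resistance between two nodes follows from the Moore-Penrose pseudo-inverse of the weighted graph Laplacian and is proportional to the commute time of the associated random walk, which is the link between circuit and random-walk theories the abstract mentions.

```python
import numpy as np

# Toy landscape graph: 5 habitat nodes, edge conductances = ease of movement
# (values are illustrative). Effective resistance is the circuit-theoretic
# isolation metric between habitat patches.
n = 5
C = np.zeros((n, n))                      # conductance (inverse resistance)
edges = {(0, 1): 1.0, (1, 2): 0.5, (0, 2): 0.2, (2, 3): 1.0, (3, 4): 0.8}
for (i, j), g in edges.items():
    C[i, j] = C[j, i] = g

L = np.diag(C.sum(axis=1)) - C            # weighted graph Laplacian
Lp = np.linalg.pinv(L)                    # Moore-Penrose pseudo-inverse

def effective_resistance(i, j):
    # R_ij = L+_ii + L+_jj - 2 L+_ij; proportional to the random-walk
    # commute time between nodes i and j.
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

print("R(0,4) =", effective_resistance(0, 4))
```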
Random Error in Judgment: The Contribution of Encoding and Retrieval Processes
ERIC Educational Resources Information Center
Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.
2009-01-01
Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…
The Need for a Kinetics for Biological Transport
Schindler, A. M.; Iberall, A. S.
1973-01-01
The traditional theory of transport across capillary membranes via a laminar Poiseuille flow is shown to be invalid. It is demonstrated that the random, diffusive nature of the molecular flow and interactions with the “pore” walls play an important role in the transport process. Neither the continuum Navier-Stokes theory nor the equivalent theory of irreversible thermodynamics is adequate to treat the problem. Combination of near-continuum hydrodynamic theory, noncontinuum kinetic theory, and the theory of fluctuations provides a first step toward modeling both liquid processes in general and membrane transport processes as a specific application. PMID:4726880
Theory-Driven Process Evaluation of a Complementary Feeding Trial in Four Countries
ERIC Educational Resources Information Center
Newman, Jamie E.; Garces, Ana; Mazariegos, Manolo; Hambidge, K. Michael; Manasyan, Albert; Tshefu, Antoinette; Lokangaka, Adrien; Sami, Neelofar; Carlo, Waldemar A.; Bose, Carl L.; Pasha, Omrana; Goco, Norman; Chomba, Elwyn; Goldenberg, Robert L.; Wright, Linda L.; Koso-Thomas, Marion; Krebs, Nancy F.
2014-01-01
We conducted a theory-driven process evaluation of a cluster randomized controlled trial comparing two types of complementary feeding (meat versus fortified cereal) on infant growth in Guatemala, Pakistan, Zambia and the Democratic Republic of Congo. We examined process evaluation indicators for the entire study cohort (N = 1236) using chi-square…
Aircraft adaptive learning control
NASA Technical Reports Server (NTRS)
Lee, P. S. T.; Vanlandingham, H. F.
1979-01-01
The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems and the control of randomly-sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly-sampled linear process model with additive process and observation noise is developed.
NASA Technical Reports Server (NTRS)
Holmquist, R.
1978-01-01
The random evolutionary hits (REH) theory of evolutionary divergence, originally proposed in 1972, is restated with attention to certain aspects of the theory that have caused confusion. The theory assumes that natural selection and stochastic processes interact and that natural selection restricts those codon sites which may fix mutations. The predicted total number of fixed nucleotide replacements agrees with data for cytochrome c, alpha-hemoglobin, beta-hemoglobin, and myoglobin. The restatement analyzes the magnitude of possible sources of errors and simplifies calculational methodology by supplying polynomial expressions to replace tables and graphs.
United States Air Force Summer Faculty Research Program (1987). Program Technical Report. Volume 2.
1987-12-01
the area of statistical inference, distribution theory and stochastic processes. I have taught courses in random processes and sample functions...controlled phase separation of isotropic, binary mixtures, the theory of spinodal decomposition has been developed by Cahn and Hilliard [5, 6]. This theory is...peak and its initial rate of growth at a given temperature are predicted by the spinodal theory. The angle of maximum intensity is then determined by
Probabilistic Estimation of Rare Random Collisions in 3 Space
2009-03-01
extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly...the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time
NASA Astrophysics Data System (ADS)
Vanmarcke, Erik
1983-03-01
Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
An improved exceedance theory for combined random stresses
NASA Technical Reports Server (NTRS)
Lester, H. C.
1974-01-01
An extension is presented of Rice's classic solution for the exceedances of a constant level by a single random process to its counterpart for an n-dimensional vector process. An interaction boundary, analogous to the constant level considered by Rice for the one-dimensional case, is assumed in the form of a hypersurface. The theory for the numbers of boundary exceedances is developed by using a joint statistical approach which fully accounts for all cross-correlation effects. An exact expression is derived for the n-dimensional exceedance density function, which is valid for an arbitrary interaction boundary. For application to biaxial states of combined random stress, the general theory is reduced to the two-dimensional case. An elliptical stress interaction boundary is assumed and the exact expression for the density function is presented. The equations are expressed in a format which facilitates calculating the exceedances by numerically evaluating a line integral. The behavior of the density function for the two-dimensional case is briefly discussed.
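For the one-dimensional case that the paper generalizes, Rice's upcrossing rate can be checked numerically. The sketch below synthesizes a stationary Gaussian process by spectral superposition (the flat band spectrum is an arbitrary choice, not from the paper) and compares counted upcrossings of a level u with Rice's formula nu(u) = (sigma_v / sigma_x) * exp(-u^2 / (2 sigma_x^2)) / (2 pi).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stationary Gaussian process synthesized as a sum of cosines with random
# phases (spectral representation); the flat band spectrum is illustrative.
freqs = np.linspace(0.5, 1.5, 200)            # angular frequencies (rad/s)
dw = freqs[1] - freqs[0]
S = np.full_like(freqs, 0.01)                 # one-sided spectral density
amps = np.sqrt(2 * S * dw)
phases = rng.uniform(0, 2 * np.pi, freqs.size)

t = np.arange(0.0, 10000.0, 0.05)
x = np.zeros_like(t)
for a, w, p in zip(amps, freqs, phases):
    x += a * np.cos(w * t + p)

sx2 = np.sum(S * dw)                          # variance of x
sv2 = np.sum(freqs**2 * S * dw)               # variance of dx/dt

u = 1.5 * np.sqrt(sx2)                        # crossing level
ups = np.sum((x[:-1] < u) & (x[1:] >= u))     # counted upcrossings
rice = np.sqrt(sv2 / sx2) / (2 * np.pi) * np.exp(-u**2 / (2 * sx2))
print("observed upcrossing rate:", ups / t[-1])
print("Rice formula            :", rice)
```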
On fatigue crack growth under random loading
NASA Astrophysics Data System (ADS)
Zhu, W. Q.; Lin, Y. K.; Lei, Y.
1992-09-01
A probabilistic analysis of the fatigue crack growth, fatigue life and reliability of a structural or mechanical component is presented on the basis of fracture mechanics and theory of random processes. The material resistance to fatigue crack growth and the time-history of the stress are assumed to be random. Analytical expressions are obtained for the special case in which the random stress is a stationary narrow-band Gaussian random process, and a randomized Paris-Erdogan law is applicable. As an example, the analytical method is applied to a plate with a central crack, and the results are compared with those obtained from digital Monte Carlo simulations.
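A Monte Carlo sketch of the special case described in the abstract, with all material and loading constants chosen purely for illustration: Rayleigh-distributed stress ranges (as for peaks of a stationary narrow-band Gaussian stress) drive a randomized Paris-Erdogan law for a plate with a central crack, and the sample paths yield a fatigue-life distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Randomized Paris-Erdogan law da/dN = C * (dK)^m, dK = Y * dS * sqrt(pi * a).
# Stress ranges dS are Rayleigh distributed, as for peaks of a stationary
# narrow-band Gaussian stress. All numbers below are illustrative assumptions.
m, Y = 3.0, 1.0                 # Paris exponent, geometry factor
sigma = 50.0                    # RMS stress (MPa)
a0, ac = 1e-3, 2e-2             # initial / critical crack length (m)
n_samples, block = 200, 2000    # Monte Carlo sample paths, cycles per block

lives = np.empty(n_samples)
for i in range(n_samples):
    # Material randomness: lognormal Paris coefficient C.
    C = np.exp(rng.normal(np.log(1e-11), 0.3))
    a, N = a0, 0
    while a < ac:
        dS = rng.rayleigh(2 * sigma, size=block)      # random stress ranges
        a += C * (Y * np.sqrt(np.pi * a)) ** m * np.sum(dS ** m)
        N += block
    lives[i] = N

print("median fatigue life (cycles):", np.median(lives))
print("10% quantile                :", np.quantile(lives, 0.1))
```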
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
Slow diffusion by Markov random flights
NASA Astrophysics Data System (ADS)
Kolesnik, Alexander D.
2018-06-01
We present a conception of slow diffusion processes in the Euclidean spaces R^m, m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, on long time intervals, leads to the stationary distributions is given. The stationary distributions of slow diffusion processes in some Euclidean spaces of low dimensions are presented.
ERIC Educational Resources Information Center
Dunlop, David L.
Reported is another study related to the Project on an Information Memory Model. This study involved using information theory to investigate the concepts of primacy and recency as they were exhibited by ninth-grade science students while processing a biological sorting problem and an immediate, abstract recall task. Two hundred randomly selected…
Optimum Array Processing for Detecting Binary Signals Corrupted by Directional Interference.
1972-12-01
specific cases. Two different series representations of a vector random process are discussed in Van Trees [3]. These two methods both require the...spacing d, etc.) its detection error represents a lower bound for the performance that might be obtained with other types of array processing (such...Middleton, Introduction to Statistical Communication Theory, New York: McGraw-Hill, 1960. 3. H.L. Van Trees, Detection, Estimation, and Modulation Theory
Quantum Random Number Generation Using a Quanta Image Sensor
Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.
2016-01-01
A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers with a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
NASA Astrophysics Data System (ADS)
Abramov, G. V.; Emeljanov, A. E.; Ivashin, A. L.
Theoretical foundations are presented for modeling a digital control system with information transfer over a multiple-access channel and a regular quantization cycle. The theory of dynamic systems with random changes of structure, including elements of the theory of Markov random processes, is used for the mathematical description of a network control system. The characteristics of such control systems are derived, and experimental studies of these systems are carried out.
Electronic Noise and Fluctuations in Solids
NASA Astrophysics Data System (ADS)
Kogan, Sh.
2008-07-01
Preface;
Part I. Introduction. Some Basic Concepts of the Theory of Random Processes: 1. Probability density functions. Moments. Stationary processes; 2. Correlation function; 3. Spectral density of noise; 4. Ergodicity and nonergodicity of random processes; 5. Random pulses and shot noise; 6. Markov processes. General theory; 7. Discrete Markov processes. Random telegraph noise; 8. Quasicontinuous (diffusion-like) Markov processes; 9. Brownian motion; 10. Langevin approach to the kinetics of fluctuations;
Part II. Fluctuation-Dissipation Relations in Equilibrium Systems: 11. Derivation of fluctuation-dissipation relations; 12. Equilibrium noise in quasistationary circuits. Nyquist theorem; 13. Fluctuations of electromagnetic fields in continuous media;
Part III. Fluctuations in Nonequilibrium Gases: 14. Some basic concepts of hot-electrons' physics; 15. Simple model of current fluctuations in a semiconductor with hot electrons; 16. General kinetic theory of quasiclassical fluctuations in a gas of particles. The Boltzmann-Langevin equation; 17. Current fluctuations and noise temperature; 18. Current fluctuations and diffusion in a gas of hot electrons; 19. One-time correlation in nonequilibrium gases; 20. Intervalley noise in multivalley semiconductors; 21. Noise of hot electrons emitting optical phonons in the streaming regime; 22. Noise in a semiconductor with a postbreakdown stable current filament;
Part IV. Generation-Recombination Noise: 23. G-R noise in uniform unipolar semiconductors; 24. Noise produced by recombination and diffusion;
Part V. Noise in Quantum Ballistic Systems: 25. Introduction; 26. Equilibrium noise and shot noise in quantum conductors; 27. Modulation noise in quantum point contacts; 28. Transition from a ballistic conductor to a macroscopic one; 29. Noise in tunnel junctions;
Part VI. Resistance Noise in Metals: 30. Incoherent scattering of electrons by mobile defects; 31. Effect of mobile scattering centers on the electron interference pattern; 32. Fluctuations of the number of diffusing scattering centers; 33. Temperature fluctuations and the corresponding noise;
Part VII. Noise in Strongly Disordered Conductors: 34. Basic ideas of the percolation theory; 35. Resistance fluctuations in percolation systems; 36. Experiments;
Part VIII. Low-Frequency Noise with a 1/f-Type Spectrum and Random Telegraph Noise: 37. Introduction; 38. Some general properties of 1/f noise; 39. Basic models of 1/f noise; 40. 1/f noise in metals; 41. Low-frequency noise in semiconductors; 42. Magnetic noise in spin glasses and some other magnetic systems; 43. Temperature fluctuations as a possible source of 1/f noise; 44. Random telegraph noise; 45. Fluctuations with 1/f spectrum in other systems; 46. General conclusions on 1/f noise;
Part IX. Noise in Superconductors and Superconducting Structures: 47. Noise in Josephson junctions; 48. Noise in type II superconductors;
References; Subject index.
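One of the book's introductory topics (Chapter 7, random telegraph noise) is easy to reproduce numerically. A minimal sketch with illustrative rate and sampling choices: a symmetric two-state Markov process switching at rate lam has the Lorentzian spectrum S(f) = 4(2 lam) / ((2 lam)^2 + (2 pi f)^2), which the averaged periodogram recovers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric random telegraph noise: a two-state Markov process switching
# between +1 and -1 at rate lam per unit time (Bernoulli approximation of
# exponential dwell times, valid for lam*dt << 1).
lam, dt, n = 5.0, 1e-3, 2**18
flips = rng.random(n) < lam * dt
x = np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)

# Averaged one-sided periodogram over segments.
seg = 2**12
segs = x[: n // seg * seg].reshape(-1, seg)
psd = (np.abs(np.fft.rfft(segs, axis=1)) ** 2).mean(axis=0) * 2 * dt / seg
f = np.fft.rfftfreq(seg, dt)

lorentz = 4 * (2 * lam) / ((2 * lam) ** 2 + (2 * np.pi * f) ** 2)
k = np.argmin(np.abs(f - lam))               # bin near the corner frequency
print("simulated S(f):", psd[k], "  Lorentzian:", lorentz[k])
```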
Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces
NASA Astrophysics Data System (ADS)
Vacaru, S. I.
2012-03-01
We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculus are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of the relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form and study their main geometric and stochastic properties. Finally, the conditions when non-random classical gravitational processes transform into stochastic ones and inversely are analyzed.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations we coin 'Paretian Poisson processes'. This class is elemental in statistical physics—connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.
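The Fréchet connection mentioned above can be made concrete with a short simulation (cutoff and exponent are my own illustrative choices): for a Poisson process on (0, inf) with intensity a*x^(-a-1), the number of points above x is Poisson with mean x^(-a), so the largest point is exactly Fréchet distributed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Paretian Poisson process sketch: mean number of points above x is x**(-a),
# hence P(max <= x) = exp(-x**(-a)), the Frechet law. Points below the small
# cutoff eps are neglected (they almost never contain the maximum).
a, eps, n_real = 1.5, 0.05, 20000

maxima = np.empty(n_real)
for i in range(n_real):
    n_pts = rng.poisson(eps ** -a)                 # points above the cutoff
    pts = eps * (1 + rng.pareto(a, n_pts))         # Pareto(x_min = eps) points
    maxima[i] = pts.max() if n_pts else eps

x = 1.0
print("empirical P(M <= 1):", np.mean(maxima <= x))
print("Frechet   exp(-1)  :", np.exp(-x ** -a))
```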
Antoneli, Fernando; Ferreira, Renata C; Briones, Marcelo R S
2016-06-01
Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network given the dynamics of each node is modeled by a RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis that allows one to extract relevant dynamical and statistical information without solving the system; (iv) one may obtain the classical rate equations from the corresponding stochastic version by averaging the dynamic random variables (small noise limit). It is important to emphasize that unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, especially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene and, hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We shall provide the necessary arguments that ensure that our coupling prescription does indeed furnish a coupled regulatory network of random dynamical systems. Finally, the fact that classical rate equations are the small noise limit of our stochastic model ensures that any validation or prediction made on the basis of the classical theory is also a validation or prediction of our model. We illustrate our framework with some simple examples of single-gene system and network motifs. Copyright © 2016 Elsevier Inc. All rights reserved.
Preliminary Characterization of Erythrocytes Deformability on the Entropy-Complexity Plane
Korol, Ana M; D’Arrigo, Mabel; Foresto, Patricia; Pérez, Susana; Martín, Maria T; Rosso, Osualdo A
2010-01-01
We present an application of wavelet-based Information Theory quantifiers (Normalized Total Shannon Entropy, MPR-Statistical Complexity and the Entropy-Complexity plane) to the characterization of red blood cell membrane viscoelasticity. These quantifiers exhibit important localization advantages provided by wavelet theory. The present approach produces a clear characterization of this dynamical system, revealing an evident manifestation of a random process in the red cell samples of healthy individuals and a sharp reduction of randomness in samples from a human haematological disease, β-thalassaemia minor. PMID:21611139
Fractional Stochastic Field Theory
NASA Astrophysics Data System (ADS)
Honkonen, Juha
2018-02-01
Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.
Stochastic arbitrage return and its implication for option pricing
NASA Astrophysics Data System (ADS)
Fedotov, Sergei; Panayides, Stephanos
2005-01-01
The purpose of this work is to explore the role that random arbitrage opportunities play in pricing financial derivatives. We use a non-equilibrium model to set up a stochastic portfolio, and for the random arbitrage return, we choose a stationary ergodic random process rapidly varying in time. We exploit the fact that option price and random arbitrage returns change on different time scales which allows us to develop an asymptotic pricing theory involving the central limit theorem for random processes. We restrict ourselves to finding pricing bands for options rather than exact prices. The resulting pricing bands are shown to be independent of the detailed statistical characteristics of the arbitrage return. We find that the volatility “smile” can also be explained in terms of random arbitrage opportunities.
2013-10-01
changes in brain function have indicated significant increases in brain activity supporting theory of mind and emotion regulation abilities in...depicted in the scenarios; adults with ASD have previously shown hypoactivation in the temporo-parietal junction theory of mind (ToM) network when...treatment MRI data have been collected with processing speed, perspective-taking, theory of mind, and emotion regulation fMRI measures on participants
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables and the structural mass, fatigue damage, and negative of natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic process, and multiple objectives.
Melnikov processes and chaos in randomly perturbed dynamical systems
NASA Astrophysics Data System (ADS)
Yagasaki, Kazuyuki
2018-07-01
We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under some nondegenerate condition, no matter how small the random forcing terms are. This result is very contrasting to the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov’s method and prove that the corresponding Melnikov functions, which we call the Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given and the Smale–Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory for the Duffing oscillator subjected to the Ornstein–Uhlenbeck process parametrically.
Dynamical influence processes on networks: general theory and applications to social contagion.
Harris, Kameron Decker; Danforth, Christopher M; Dodds, Peter Sheridan
2013-08-01
We study binary state dynamics on a network where each node acts in response to the average state of its neighborhood. By allowing varying amounts of stochasticity in both the network and node responses, we find different outcomes in random and deterministic versions of the model. In the limit of a large, dense network, however, we show that these dynamics coincide. We construct a general mean-field theory for random networks and show this predicts that the dynamics on the network is a smoothed version of the average response function dynamics. Thus, the behavior of the system can range from steady state to chaotic depending on the response functions, network connectivity, and update synchronicity. As a specific example, we model the competing tendencies of imitation and nonconformity by incorporating an off-threshold into standard threshold models of social contagion. In this way, we attempt to capture important aspects of fashions and societal trends. We compare our theory to extensive simulations of this "limited imitation contagion" model on Poisson random graphs, finding agreement between the mean-field theory and stochastic simulations.
Decisionmaking under risk in invasive species management: risk management theory and applications
Shefali V. Mehta; Robert G. Haight; Frances R. Homans
2010-01-01
Invasive species management is closely entwined with the assessment and management of risk that arises from the inherently random nature of the invasion process. The theory and application of risk management for invasive species with an economic perspective is reviewed in this synthesis. Invasive species management can be delineated into three general categories:...
ERIC Educational Resources Information Center
Freeland, Peter
2013-01-01
Charles Darwin supposed that evolution involved a process of gradual change, generated randomly, with the selection and retention over many generations of survival-promoting features. Some theists have never accepted this idea. "Intelligent design" is a relatively recent theory, supposedly based on scientific evidence, which attempts to…
Agrell, U
1986-07-01
A general stochastic theory of cancer is outlined by applying to cancer the laws of quantum mechanics instead of the laws of traditional physics, especially with regard to the concept of cause. This theory is combined with the evolutionary theory on the one hand and the mutation theory of aging/death of multicellular beings consisting of somatic cells on the other. The cancer theory centers around the phenomenon of DNA mutating randomly by quantal steps. Because of mutations in DNA in general, as well as in the special DNA which codes for the DNA-repairing systems, the body is permeated in the course of time - via increasing losses of information in the DNA - with increasingly altered proteins, which is observed as the aging process. From this process of entropy the concept of the cancer cell is deduced: when the losses of information in a certain cell and also in the repairing and immunological systems have random concordances, cancer as a type of antigen comes into existence. Here the concept of CONCORDANCE OF "BLURRING" is introduced. This CONCORDANCE OF "BLURRING" occurs randomly approximately once among three times 60 000 billion cells, i.e. three human beings. The so-called "oncogenes" are integrated into this theory. It is proposed to test this theory using monozygotic twins both suffering from cancer: by injecting monoclonally multiplied immunological systems, possibly also repair systems, from the respective other twin, the proposition is that the cancer would be cured in both twins. If this critical experiment is successful, one can cure human beings suffering from cancer by the same procedure, using those systems from their relatives. This treatment would cure the cancer to the extent to which there is a genetic correspondence in the sections of genes coding for these systems.
Asymptotics of small deviations of the Bogoliubov processes with respect to a quadratic norm
NASA Astrophysics Data System (ADS)
Pusev, R. S.
2010-10-01
We obtain results on small deviations of Bogoliubov’s Gaussian measure occurring in the theory of the statistical equilibrium of quantum systems. For some random processes related to Bogoliubov processes, we find the exact asymptotics of the probabilities of their small deviations with respect to a Hilbert norm.
Birds of a Feather Bully Together: Group Processes and Children's Responses to Bullying
ERIC Educational Resources Information Center
Jones, Sian E.; Manstead, Antony S. R.; Livingstone, Andrew
2009-01-01
Recent research has shown that a group-level analysis can inform our understanding of school bullying. The present research drew on social identity theory and intergroup emotion theory. Nine- to eleven-year olds were randomly assigned to the same group as story characters who were described as engaging in bullying, as being bullied, or as neither…
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
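A quick numerical illustration of the extreme-statistics half of this picture (sample sizes are arbitrary): maxima of n i.i.d. exponential variables, shifted by log n, converge to the Gumbel law.

```python
import numpy as np

rng = np.random.default_rng(4)

# Maxima of n iid Exp(1) variables, shifted by log(n), follow the Gumbel law
# P(Z <= z) = exp(-exp(-z)) as n grows. Generated in chunks to bound memory.
n, n_trials, chunk = 10_000, 20_000, 500
z = np.concatenate([
    rng.exponential(1.0, (chunk, n)).max(axis=1)
    for _ in range(n_trials // chunk)
]) - np.log(n)

for q in (-1.0, 0.0, 1.0, 2.0):
    print(f"P(Z <= {q:+.0f}): empirical {np.mean(z <= q):.4f}, "
          f"Gumbel {np.exp(-np.exp(-q)):.4f}")
```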
Collective relaxation dynamics of small-world networks
NASA Astrophysics Data System (ADS)
Grabow, Carsten; Grosskinsky, Stefan; Kurths, Jürgen; Timme, Marc
2015-05-01
Complex networks exhibit a wide range of collective dynamic phenomena, including synchronization, diffusion, relaxation, and coordination processes. Their asymptotic dynamics is generically characterized by the local Jacobian, graph Laplacian, or a similar linear operator. The structure of networks with regular, small-world, and random connectivities are reasonably well understood, but their collective dynamical properties remain largely unknown. Here we present a two-stage mean-field theory to derive analytic expressions for network spectra. A single formula covers the spectrum from regular via small-world to strongly randomized topologies in Watts-Strogatz networks, explaining the simultaneous dependencies on network size N, average degree k, and topological randomness q. We present simplified analytic predictions for the second-largest and smallest eigenvalue, and numerical checks confirm our theoretical predictions for zero, small, and moderate topological randomness q, including the entire small-world regime. For large q of the order of one, we apply standard random matrix theory, thereby overarching the full range from regular to randomized network topologies. These results may contribute to our analytic and mechanistic understanding of collective relaxation phenomena of network dynamical systems.
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
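A minimal Monte Carlo integration sketch in the spirit of the techniques described (the integrand is an arbitrary example): the estimate is a sample mean over random draws, and its standard error shrinks as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo integration of I = int_0^1 exp(-x**2) dx (reference ~ 0.74682):
# the estimator is a sample mean; its standard error decays as 1/sqrt(N).
for n in (100, 10_000, 1_000_000):
    y = np.exp(-rng.random(n) ** 2)
    print(f"N={n:>9,}: I ~ {y.mean():.5f} +/- {y.std(ddof=1) / np.sqrt(n):.5f}")
```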
Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-14
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
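A stripped-down simulation in the spirit of the model, with my own illustrative rates, band width, and initial book rather than the paper's specification: limit orders, market orders, and cancellations all occur as Poisson events on an integer price grid, and the average bid-ask spread emerges from the order flow alone.

```python
import numpy as np

rng = np.random.default_rng(6)

# Zero-intelligence order flow on an integer price grid. Rates, band width K
# and the initial book are illustrative choices, not the paper's calibration.
alpha, mu, delta, K = 1.0, 5.0, 0.2, 10       # limit/market/cancel rates, band
bids = {p: 5 for p in range(95, 100)}         # price -> resting shares
asks = {p: 5 for p in range(101, 106)}

def cancel_random_share(book):
    if not book:
        return
    prices = list(book)
    w = np.array([book[p] for p in prices], float)
    p = prices[rng.choice(len(prices), p=w / w.sum())]
    book[p] -= 1
    if book[p] == 0:
        del book[p]

spreads = []
for _ in range(50_000):
    if not bids or not asks:
        break
    best_bid, best_ask = max(bids), min(asks)
    n_shares = sum(bids.values()) + sum(asks.values())
    rates = np.array([alpha * K, alpha * K, mu, mu, delta * n_shares])
    ev = rng.choice(5, p=rates / rates.sum())  # pick next Poisson event type
    if ev == 0:                                # buy limit order below the ask
        p = int(rng.integers(best_ask - K, best_ask))
        bids[p] = bids.get(p, 0) + 1
    elif ev == 1:                              # sell limit order above the bid
        p = int(rng.integers(best_bid + 1, best_bid + K + 1))
        asks[p] = asks.get(p, 0) + 1
    elif ev == 2:                              # buy market order hits the ask
        asks[best_ask] -= 1
        if asks[best_ask] == 0:
            del asks[best_ask]
    elif ev == 3:                              # sell market order hits the bid
        bids[best_bid] -= 1
        if bids[best_bid] == 0:
            del bids[best_bid]
    else:                                      # cancel a random resting share
        cancel_random_share(bids if rng.random() < 0.5 else asks)
    if bids and asks:
        spreads.append(min(asks) - max(bids))

print("mean spread (ticks):", np.mean(spreads))
```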
Correlative weighted stacking for seismic data in the wavelet domain
Zhang, S.; Xu, Y.; Xia, J.; ,
2004-01-01
Horizontal stacking plays a crucial role in modern seismic data processing: it not only suppresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both result in false events caused by this noise. Wavelet transforms and higher-order statistics are very useful methods in modern signal processing. The multiresolution analysis of wavelet theory can decompose a signal on different scales, and higher-order correlation functions can suppress correlated noise for which the conventional correlation function is of no use. Based on the theory of the wavelet transform and higher-order statistics, a high order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction by weights calculated through higher-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
Valenzuela, Carlos Y
2013-01-01
The Neutral Theory of Evolution (NTE) proposes mutation and random genetic drift as the most important evolutionary factors. The most conspicuous feature of evolution is the genomic stability during paleontological eras and lack of variation among taxa; 98% or more of nucleotide sites are monomorphic within a species. NTE explains this homology by random fixation of neutral bases and negative selection (purifying selection) that does not contribute either to evolution or polymorphisms. Purifying selection is insufficient to account for this evolutionary feature and the Nearly-Neutral Theory of Evolution (N-NTE) included negative selection with coefficients as low as mutation rate. These NTE and N-NTE propositions are thermodynamically (tendency to random distributions, second law), biotically (recurrent mutation), logically and mathematically (resilient equilibria instead of fixation by drift) untenable. Recurrent forward and backward mutation and random fluctuations of base frequencies alone in a site make life organization and fixations impossible. Drift is not a directional evolutionary factor, but a directional tendency of matter-energy processes (second law) which threatens the biotic organization. Drift cannot drive evolution. In a site, the mutation rates among bases and selection coefficients determine the resilient equilibrium frequency of bases that genetic drift cannot change. The expected neutral random interaction among nucleotides is zero; however, huge interactions and periodicities were found between bases of dinucleotides separated by 1, 2... and more than 1,000 sites. Every base is co-adapted with the whole genome. Neutralists found that neutral evolution is independent of population size (N); thus neutral evolution should be independent of drift, because drift effect is dependent upon N. Also, chromosome size and shape as well as protein size are far from random.
A geometric theory for Lévy distributions
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2014-08-01
Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.
Performance Assessment in Serious Games: Compensating for the Effects of Randomness
ERIC Educational Resources Information Center
Westera, Wim
2016-01-01
This paper is about performance assessment in serious games. We conceive serious gaming as a process of player-lead decision taking. Starting from combinatorics and item-response theory we provide an analytical model that makes explicit to what extent observed player performances (decisions) are blurred by chance processes (guessing behaviors). We…
Random Matrix Theory in molecular dynamics analysis.
Palese, Luigi Leonardo
2015-01-01
It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low index projections. Because this is reminiscent of the results obtained by performing PCA on a multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments which are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis on correlation matrices allows one to easily differentiate random correlations, simply deriving from the finite length of the process, from non-random signals reflecting the intrinsic system properties. Our results clearly show that protein dynamics is not really Brownian also in presence of the cosine-shaped low index projections on principal axes. Copyright © 2014 Elsevier B.V. All rights reserved.
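The background fact the paper starts from is easy to reproduce: PCA of a plain high-dimensional random walk yields cosine-shaped low-index projections. The sketch below (trajectory length and dimension are arbitrary choices) measures this with the standard cosine-content statistic.

```python
import numpy as np

rng = np.random.default_rng(7)

# PCA of a high-dimensional random walk. The cosine content of the i-th
# principal projection p_i(t),
#   c_i = (2/T) * (sum_t cos(pi*i*t/T) * p_i(t))**2 / sum_t p_i(t)**2,
# is close to 1 when the projection is a half-cosine, as for pure diffusion.
T, d = 5000, 100                        # frames, degrees of freedom
traj = np.cumsum(rng.normal(size=(T, d)), axis=0)
traj -= traj.mean(axis=0)

cov = traj.T @ traj / T
w, V = np.linalg.eigh(cov)
proj = traj @ V[:, ::-1]                # projections, decreasing eigenvalue

t = np.arange(T)
for i in (1, 2, 3):
    p = proj[:, i - 1]
    c = 2 / T * (np.cos(np.pi * i * t / T) @ p) ** 2 / (p @ p)
    print(f"cosine content of PC{i}: {c:.3f}")
```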
Large-deviation theory for diluted Wishart random matrices
NASA Astrophysics Data System (ADS)
Castillo, Isaac Pérez; Metz, Fernando L.
2018-03-01
Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economy. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ R+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-N Ψ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
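A sketch of the numerical-diagonalization side of such a comparison; sizes, dilution, and the normalization by the mean connectivity are my own illustrative choices: sample diluted Wishart matrices, count eigenvalues below x, and estimate the first cumulants of I_N(x).

```python
import numpy as np

rng = np.random.default_rng(8)

# Diluted Wishart ensemble: W = X X^T / c with X an N x M matrix whose entries
# are Gaussian but nonzero only with probability c/M (mean connectivity c).
# Count eigenvalues below x and estimate the first cumulants of I_N(x).
N, M, c, x, n_samples = 100, 200, 4.0, 1.0, 400

counts = np.empty(n_samples)
for i in range(n_samples):
    X = (rng.random((N, M)) < c / M) * rng.normal(size=(N, M))
    counts[i] = np.sum(np.linalg.eigvalsh(X @ X.T / c) < x)

print("mean of I_N(x)    :", counts.mean())
print("variance of I_N(x):", counts.var(ddof=1))
```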
Comparison of Image Processing Techniques using Random Noise Radar
2014-03-27
Acronyms: UWB, ultra-wideband; EM, electromagnetic; CW, continuous wave; RCS, radar cross section; RFI, radio frequency interference; FFT, fast Fourier transform. ...several factors including radar cross section (RCS), orientation, and material makeup. A single monostatic radar at some position collects only range and...Chapter 2 is to provide the theory behind noise radar and SAR imaging. Section 2.1 presents the basic concepts in transmitting and receiving random
Long-term strength and damage accumulation in laminates
NASA Astrophysics Data System (ADS)
Dzenis, Yuris A.; Joshi, Shiv P.
1993-04-01
A modified version of the probabilistic model developed by the authors for damage evolution analysis of laminates subjected to random loading is utilized to predict the long-term strength of laminates. The model assumes that each ply in a laminate consists of a large number of mesovolumes. Probabilistic variation functions for mesovolume stiffnesses as well as strengths are used in the analysis. Stochastic strains are calculated using lamination theory and random function theory. Deterioration of ply stiffnesses is calculated on the basis of the probabilities of mesovolume failures using the theory of excursions of a random process beyond the limits. Long-term strength and damage accumulation in a Kevlar/epoxy laminate under tension and complex in-plane loading are investigated. Effects of the mean level and stochastic deviation of the loading on damage evolution and time-to-failure of the laminate are discussed. Cumulative damage at the time of final failure is greater at low loading levels than at high loading levels. The effect of the deviation in loading is more pronounced at lower mean loading levels.
A correlated Walks' theory for DNA denaturation
NASA Astrophysics Data System (ADS)
Mejdani, R.
1994-08-01
We have shown that by using a correlated Walks' theory for the lattice gas model on a one-dimensional lattice, we can study, besides the saturation curves obtained before for enzyme kinetics, also the DNA denaturation process. In the limit of no interactions between sites, the equation for the melting curves of DNA reduces to the random-model equation. Thus our theory leads naturally to this classical equation in the limiting case.
MODEL FOR INSTANTANEOUS RESIDENTIAL WATER DEMANDS
Residential water use is visualized as a customer-server interaction often encountered in queueing theory. Individual customers are assumed to arrive according to a nonhomogeneous Poisson process, then engage water servers for random lengths of time. Busy servers are assumed t...
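A minimal sketch of simulating such nonhomogeneous Poisson arrivals by Lewis-Shedler thinning; the two-peak diurnal rate function below is a made-up stand-in for a residential demand profile, not the model's calibration.

```python
import numpy as np

rng = np.random.default_rng(9)

# Lewis-Shedler thinning: generate candidates from a homogeneous Poisson
# process at the peak rate lam_max, then accept each with probability
# lam(t) / lam_max. Rates are in arrivals per hour (illustrative).
def lam(t):
    h = t % 24
    return 2 + 6 * np.exp(-((h - 7) ** 2) / 2) + 8 * np.exp(-((h - 19) ** 2) / 3)

lam_max, T = 10.5, 24.0
t, arrivals = 0.0, []
while True:
    t += rng.exponential(1 / lam_max)        # candidate inter-arrival time
    if t > T:
        break
    if rng.random() < lam(t) / lam_max:      # thinning acceptance step
        arrivals.append(t)

grid = np.linspace(0, T, 2401)
print("arrivals in 24 h          :", len(arrivals))
print("expected (integral of lam):", lam(grid).mean() * T)
```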
CBT Specific Process in Exposure-Based Treatments: Initial Examination in a Pediatric OCD Sample
Benito, Kristen Grabill; Conelea, Christine; Garcia, Abbe M.; Freeman, Jennifer B.
2012-01-01
Cognitive-behavioral theory and empirical support suggest that optimal activation of fear is a critical component of successful exposure treatment. Using this theory, we developed a coding methodology for measuring CBT-specific process during exposure. We piloted this methodology in a sample of young children (N = 18) who had previously received CBT as part of a randomized controlled trial. Results supported the preliminary reliability and predictive validity of the coding variables with 12-week and 3-month treatment outcome data, generally showing results consistent with CBT theory. However, given our limited and restricted sample, additional testing is warranted. Measurement of CBT-specific process using this methodology may have implications for understanding mechanisms of change in exposure-based treatments and for improving dissemination efforts through identification of therapist behaviors associated with improved outcome. PMID:22523609
Watson, Richard A; Szathmáry, Eörs
2016-02-01
The theory of evolution links random variation and selection to incremental adaptation. In a different intellectual domain, learning theory links incremental adaptation (e.g., from positive and/or negative reinforcement) to intelligent behaviour. Specifically, learning theory explains how incremental adaptation can acquire knowledge from past experience and use it to direct future behaviours toward favourable outcomes. Until recently such cognitive learning seemed irrelevant to the 'uninformed' process of evolution. In our opinion, however, new results formally linking evolutionary processes to the principles of learning might provide solutions to several evolutionary puzzles: the evolution of evolvability, the evolution of ecological organisation, and evolutionary transitions in individuality. If so, the ability of evolution to learn might explain how it produces such apparently intelligent designs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Information content versus word length in random typing
NASA Astrophysics Data System (ADS)
Ferrer-i-Cancho, Ramon; Moscoso del Prado Martín, Fermín
2011-12-01
Recently, it has been claimed that a linear relationship between a measure of information content and word length is expected from word length optimization and it has been shown that this linearity is supported by a strong correlation between information content and word length in many languages (Piantadosi et al 2011 Proc. Nat. Acad. Sci. 108 3825). Here, we study in detail some connections between this measure and standard information theory. The relationship between the measure and word length is studied for the popular random typing process where a text is constructed by pressing keys at random from a keyboard containing letters and a space behaving as a word delimiter. Although this random process does not optimize word lengths according to information content, it exhibits a linear relationship between information content and word length. The exact slope and intercept are presented for three major variants of the random typing process. A strong correlation between information content and word length can simply arise from the units making a word (e.g., letters) and not necessarily from the interplay between a word and its context as proposed by Piantadosi and co-workers. In itself, the linear relation does not entail the results of any optimization process.
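The setup is easy to reproduce numerically. A sketch of the simplest variant (all keys equiprobable; the alphabet size A is an arbitrary choice): the self-information -log2 p(word) grows approximately linearly with word length, at slope log2(A+1) per character, even though nothing is optimized:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
A = 4                                        # letters on the keyboard (assumption)
keys = [chr(97 + i) for i in range(A)] + [" "]
text = "".join(rng.choice(keys, size=500_000))
words = [w for w in text.split(" ") if w]

counts = Counter(words)
total = sum(counts.values())
# Mean self-information -log2 p(word), grouped by word length
by_len = {}
for w, c in counts.items():
    by_len.setdefault(len(w), []).append(-np.log2(c / total))
for L in sorted(by_len)[:7]:
    print(L, round(float(np.mean(by_len[L])), 2))   # ~log2(A+1) per extra letter
```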
Random numbers certified by Bell's theorem.
Pironio, S; Acín, A; Massar, S; de la Giroday, A Boyer; Matsukevich, D N; Maunz, P; Olmschenk, S; Hayes, D; Luo, L; Manning, T A; Monroe, C
2010-04-15
Randomness is a fundamental feature of nature and a valuable resource for applications ranging from cryptography and gambling to numerical simulation of physical and biological systems. Random numbers, however, are difficult to characterize mathematically, and their generation must rely on an unpredictable physical process. Inaccuracies in the theoretical modelling of such processes or failures of the devices, possibly due to adversarial attacks, limit the reliability of random number generators in ways that are difficult to control and detect. Here, inspired by earlier work on non-locality-based and device-independent quantum information processing, we show that the non-local correlations of entangled quantum particles can be used to certify the presence of genuine randomness. It is thereby possible to design a cryptographically secure random number generator that does not require any assumption about the internal working of the device. Such a strong form of randomness generation is impossible classically and possible in quantum systems only if certified by a Bell inequality violation. We carry out a proof-of-concept demonstration of this proposal in a system of two entangled atoms separated by approximately one metre. The observed Bell inequality violation, featuring near perfect detection efficiency, guarantees that 42 new random numbers are generated with 99 per cent confidence. Our results lay the groundwork for future device-independent quantum information experiments and for addressing fundamental issues raised by the intrinsic randomness of quantum theory.
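For orientation, a closed-form bound relating the CHSH value S to the min-entropy certified per run is commonly quoted in connection with this work; the helper below implements that form, on the assumption that it (rather than the paper's full semidefinite-programming bound) suffices for a rough estimate:

```python
import numpy as np

def min_entropy_per_run(S):
    """Analytic lower bound on the min-entropy of one output bit certified
    by a CHSH value S in (2, 2*sqrt(2)]; the classical bound S <= 2 yields
    zero certified randomness, the Tsirelson value yields one full bit."""
    S = np.clip(S, 2.0, 2.0 * np.sqrt(2.0))
    return 1.0 - np.log2(1.0 + np.sqrt(2.0 - S ** 2 / 4.0))

for S in (2.0, 2.2, 2.5, 2.0 * np.sqrt(2.0)):
    print(round(S, 3), "->", round(float(min_entropy_per_run(S)), 3), "bits/run")
```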
Diversity of Poissonian populations.
Eliazar, Iddo I; Sokolov, Igor M
2010-01-01
Populations represented by collections of points scattered randomly on the real line are ubiquitous in science and engineering. The statistical modeling of such populations leads naturally to Poissonian populations: Poisson processes on the real line with a distinguished maximal point. Poissonian populations are infinite objects underlying key issues in statistical physics, probability theory, and random fractals. Due to their infiniteness, measuring the diversity of Poissonian populations depends on the lower-bound cut-off applied. This research characterizes the classes of Poissonian populations whose diversities are invariant with respect to the cut-off level applied and establishes an elemental connection between these classes and extreme-value theory. The measures of diversity considered are variance and dispersion, Simpson's index and inverse participation ratio, Shannon's entropy and Rényi's entropy, and Gini's index.
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
Mind-sets matter: a meta-analytic review of implicit theories and self-regulation.
Burnette, Jeni L; O'Boyle, Ernest H; VanEpps, Eric M; Pollack, Jeffrey M; Finkel, Eli J
2013-05-01
This review builds on self-control theory (Carver & Scheier, 1998) to develop a theoretical framework for investigating associations of implicit theories with self-regulation. This framework conceptualizes self-regulation in terms of 3 crucial processes: goal setting, goal operating, and goal monitoring. In this meta-analysis, we included articles that reported a quantifiable assessment of implicit theories and at least 1 self-regulatory process or outcome. With a random effects approach used, meta-analytic results (total unique N = 28,217; k = 113) across diverse achievement domains (68% academic) and populations (age range = 5-42; 10 different nationalities; 58% from United States; 44% female) demonstrated that implicit theories predict distinct self-regulatory processes, which, in turn, predict goal achievement. Incremental theories, which, in contrast to entity theories, are characterized by the belief that human attributes are malleable rather than fixed, significantly predicted goal setting (performance goals, r = -.151; learning goals, r = .187), goal operating (helpless-oriented strategies, r = -.238; mastery-oriented strategies, r = .227), and goal monitoring (negative emotions, r = -.233; expectations, r = .157). The effects for goal setting and goal operating were stronger in the presence (vs. absence) of ego threats such as failure feedback. Discussion emphasizes how the present theoretical analysis merges an implicit theory perspective with self-control theory to advance scholarship and unlock major new directions for basic and applied research.
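The random-effects machinery referred to here can be sketched compactly. A DerSimonian-Laird pooling of correlations on Fisher's z scale, with made-up study data (the actual meta-analysis used many more studies and moderators):

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooling of effect sizes y with sampling variances v."""
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)              # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)         # between-study variance
    w_re = 1.0 / (v + tau2)
    mean = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mean, se, tau2

# Hypothetical study correlations r and sample sizes n, pooled on Fisher's z scale
r = np.array([0.15, 0.22, 0.05, 0.30, 0.18])
n = np.array([120, 85, 200, 60, 150])
z, v = np.arctanh(r), 1.0 / (n - 3)
mean_z, se_z, tau2 = dersimonian_laird(z, v)
print("pooled r:", round(float(np.tanh(mean_z)), 3), " tau^2:", round(float(tau2), 4))
```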
Anomalous transport and stochastic processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.
1996-03-01
The relation between kinetic transport theory and the theory of stochastic processes is reviewed. The Langevin equation formalism provides important, but rather limited information about diffusive processes. A quite promising new approach to modeling complex situations, such as transport in incompletely destroyed magnetic surfaces, is provided by the theory of Continuous Time Random Walks (CTRW), which is presented in some detail. An academic test problem is discussed in great detail: transport of particles in a fluctuating magnetic field, in the limit of infinite perpendicular correlation length. The well-known subdiffusive behavior of the Mean Square Displacement (MSD), proportional to t^{1/2}, is recovered by a CTRW, but the complete density profile is not. However, the quasilinear approximation of the kinetic equation has the form of a non-Markovian diffusion equation and can thus be generated by a CTRW. 16 refs., 3 figs.
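A toy CTRW reproducing the subdiffusive MSD is straightforward. This sketch uses Pareto waiting times with tail exponent 1/2 and unit jumps, an illustrative stand-in for the magnetic-field problem discussed in the report:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.5                      # waiting-time tail exponent: psi(t) ~ t^(-1-alpha)
n_walkers, n_jumps = 5000, 400

# Continuous time random walk: heavy-tailed waits, unit +/-1 jumps
waits = rng.random((n_walkers, n_jumps)) ** (-1.0 / alpha)   # Pareto(alpha)
times = np.cumsum(waits, axis=1)
paths = np.cumsum(rng.choice([-1.0, 1.0], size=(n_walkers, n_jumps)), axis=1)

for t in (1e1, 1e2, 1e3, 1e4):
    idx = (times <= t).sum(axis=1) - 1        # index of last jump before time t
    x = np.where(idx >= 0, paths[np.arange(n_walkers), np.maximum(idx, 0)], 0.0)
    print(f"t={t:8.0f}   MSD={np.mean(x**2):9.2f}")   # grows like t**alpha
```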
Optimum systems design with random input and output applied to solar water heating
NASA Astrophysics Data System (ADS)
Abdel-Malek, L. L.
1980-03-01
Solar water heating systems are evaluated. Models were developed to estimate the percentage of energy supplied from the Sun to a household. Since solar water heating systems have random input and output, queueing theory and birth-and-death processes were the major tools in developing the evaluation models. Microeconomic methods help in determining the optimum size of the solar water heating system design parameters, i.e., the water tank volume and the collector area.
Surprisingly rational: probability theory plus noise explains biases in judgment.
Costello, Fintan; Watts, Paul
2014-07-01
The systematic biases seen in people's probability judgments are typically taken as evidence that people do not use the rules of probability theory when reasoning about probability but instead use heuristics, which sometimes yield reasonable judgments and sometimes yield systematic biases. This view has had a major impact in economics, law, medicine, and other fields; indeed, the idea that people cannot reason with probabilities has become a truism. We present a simple alternative to this view, where people reason about probability according to probability theory but are subject to random variation or noise in the reasoning process. In this account the effect of noise is canceled for some probabilistic expressions. Analyzing data from 2 experiments, we find that, for these expressions, people's probability judgments are strikingly close to those required by probability theory. For other expressions, this account produces systematic deviations in probability estimates. These deviations explain 4 reliable biases in human probabilistic reasoning (conservatism, subadditivity, conjunction, and disjunction fallacies). These results suggest that people's probability judgments embody the rules of probability theory and that biases in those judgments are due to the effects of random noise. (c) 2014 APA, all rights reserved.
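The noise account is simple enough to simulate. In this toy version (sample size, noise rate, and the independence of A and B are our assumptions), each judgment is a noisy relative frequency: individual judgments show conservatism, while the combination P(A)+P(B)-P(A and B)-P(A or B) stays near zero because the noise terms cancel:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 0.15          # probability a sampled item is misread (noise rate)
n_samples = 300   # items sampled per judgment

def judged(p):
    """Noisy frequency-based judgment: each of n_samples draws is
    flipped with probability d before being counted."""
    flags = rng.random(n_samples) < p
    flips = rng.random(n_samples) < d
    return np.mean(flags ^ flips)     # expectation: (1-2d)*p + d

p_a, p_b = 0.8, 0.4
p_and = p_a * p_b                     # independence assumed for the toy example
p_or = p_a + p_b - p_and

est = {k: float(np.mean([judged(p) for _ in range(2000)]))
       for k, p in dict(A=p_a, B=p_b, AND=p_and, OR=p_or).items()}
print({k: round(v, 3) for k, v in est.items()})
# High p underestimated, low p overestimated (conservatism), yet:
print("identity:", round(est["A"] + est["B"] - est["AND"] - est["OR"], 3))
```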
A cluster randomized theory-guided oral hygiene trial in adolescents-A latent growth model.
Aleksejūnienė, J; Brukienė, V
2018-05-01
(i) To test whether theory-guided interventions are more effective than conventional dental instruction (CDI) for changing oral hygiene in adolescents and (ii) to examine whether such interventions equally benefit both genders and different socio-economic (SES) groups. A total of 244 adolescents were recruited from three schools, and cluster randomization allocated adolescents to one of the three types of interventions: two were theory-based interventions (Precaution Adoption Process Model or Authoritative Parenting Model) and CDI served as an active control. Oral hygiene levels % (OH) were assessed at baseline, after 3 months and after 12 months. A complete data set was available for 166 adolescents (the total follow-up rate: 69%). There were no significant differences in baseline OH between those who participated throughout the study and those who dropped out. Bivariate and multivariate analyses showed that theory-guided interventions produced significant improvements in oral hygiene and that there were no significant gender or socio-economic differences. Theory-guided interventions produced more positive changes in OH than CDI, and these changes did not differ between gender and SES groups. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory
NASA Astrophysics Data System (ADS)
Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick
2018-05-01
For an N × N Haar-distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and, by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L^1-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.
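A numerical look at the counting field is easy to set up. The sketch below samples Haar unitaries via QR of a complex Ginibre matrix (Mezzadri's phase-fixing recipe) and counts eigenvalues in a mesoscopic arc; the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(11)

def haar_unitary(n):
    """Haar-distributed unitary via QR of a complex Ginibre matrix,
    with phases fixed by the diagonal of R (Mezzadri's recipe)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

N = 150
arc = 2.0 * np.pi / np.sqrt(N)          # mesoscopic arc length
counts = []
for _ in range(200):
    angles = np.angle(np.linalg.eigvals(haar_unitary(N)))
    counts.append(int(np.sum((angles >= 0.0) & (angles < arc))))
counts = np.array(counts)
# Centred counts are asymptotically Gaussian with slowly (logarithmically)
# growing variance -- the hallmark of the log-correlated field above.
print("mean:", counts.mean(), " expected:", N * arc / (2.0 * np.pi))
print("variance:", counts.var())
```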
Explore Stochastic Instabilities of Periodic Points by Transition Path Theory
NASA Astrophysics Data System (ADS)
Cao, Yu; Lin, Ling; Zhou, Xiang
2016-06-01
We consider the noise-induced transitions from a linearly stable periodic orbit consisting of T periodic points in a randomly perturbed discrete logistic map. Traditional large deviation theory and asymptotic analysis in the small-noise limit cannot distinguish the quantitative difference in noise-induced stochastic instabilities among the T periodic points. To attack this problem, we generalize transition path theory to discrete-time, continuous-space stochastic processes. In our first criterion to quantify the relative instability among the T periodic points, we use the distribution of the last passage location for transitions from the whole periodic orbit to a prescribed disjoint set. This distribution is related to the individual contributions of each periodic point to the transition rate. The second criterion is based on the competency of the transition paths associated with each periodic point. Both criteria utilize the reactive probability current of transition path theory. Our numerical results for the logistic map reveal the transition mechanism of escaping from the stable periodic orbit and identify which periodic point is more prone to lose stability so as to make successful transitions under random perturbations.
Mean first-passage times of non-Markovian random walkers in confinement.
Guérin, T; Levernier, N; Bénichou, O; Voituriez, R
2016-06-16
The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.
Mean first-passage times of non-Markovian random walkers in confinement
NASA Astrophysics Data System (ADS)
Guérin, T.; Levernier, N.; Bénichou, O.; Voituriez, R.
2016-06-01
The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.
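For a brute-force check of the kind such predictions are compared against, one can generate exact fractional-Brownian-motion paths by Cholesky-factorizing the covariance and record level-crossing (first-passage) times. This is a numerical sketch (grid, Hurst exponent, and target level are arbitrary), not the authors' analytic method:

```python
import numpy as np

rng = np.random.default_rng(5)
H, n, dt = 0.3, 800, 0.05              # Hurst exponent H < 1/2: antipersistent
t = dt * np.arange(1, n + 1)

# Exact fBm covariance, factorized once; each path is then L @ normal
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))

target, fpt = 1.0, []
for _ in range(2000):
    path = L @ rng.standard_normal(n)
    hit = int(np.argmax(path >= target))   # 0 if the level is never reached
    if path[hit] >= target:
        fpt.append(t[hit])
print("hit fraction:", len(fpt) / 2000, " mean FPT:", round(float(np.mean(fpt)), 2))
```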
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2004-04-01
Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power-law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits a singular second-order moment, which suggests that its probability density function (PDF) has a power-law tail. It is verified that the PDF of the ACD(1) has a power-law tail with an arbitrary exponent depending on a model parameter. On the basis of the theory of random multiplicative processes, a relation between the model parameter and the power-law exponent is derived theoretically. It is confirmed from numerical simulations that the relation is valid. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.
Sato, Aki-Hiro
2004-04-01
Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power-law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits a singular second-order moment, which suggests that its probability density function (PDF) has a power-law tail. It is verified that the PDF of the ACD(1) has a power-law tail with an arbitrary exponent depending on a model parameter. On the basis of the theory of random multiplicative processes, a relation between the model parameter and the power-law exponent is derived theoretically. It is confirmed from numerical simulations that the relation is valid. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.
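A sketch under one common ACD(1) specification, x_n = eps_n (omega + alpha x_{n-1}) with exponential innovations, where the process is of Kesten type and the predicted tail exponent kappa solves alpha^kappa Gamma(kappa+1) = 1; the parameterization and innovation law are our assumptions, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

rng = np.random.default_rng(9)
omega, alpha, n = 0.1, 0.6, 500_000

# ACD(1) as a Kesten-type random multiplicative process
eps = rng.exponential(1.0, n)
x = np.empty(n)
x[0] = 1.0
for i in range(1, n):
    x[i] = eps[i] * (omega + alpha * x[i - 1])

# Predicted tail exponent: E[(alpha*eps)^kappa] = 1, i.e.
# alpha**kappa * Gamma(kappa + 1) = 1 for exponential innovations
kappa = brentq(lambda k: alpha ** k * gamma(k + 1.0) - 1.0, 0.01, 20.0)

tail = np.sort(x)[-5000:]                      # top 1% of the sample
hill = 1.0 / np.mean(np.log(tail / tail[0]))   # Hill estimator of the exponent
print("predicted kappa:", round(kappa, 3), " Hill estimate:", round(float(hill), 3))
```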
Variations on a theme of Lander and Waterman
DOE Office of Scientific and Technical Information (OSTI.GOV)
Speed, T.
1997-12-01
The original Lander and Waterman mathematical analysis was for fingerprinting random clones. Since that time, a number of variants of their theory have appeared, including ones which apply to mapping by anchoring random clones and to non-random or directed clone mapping. The same theory is now widely used to devise random sequencing strategies. In this talk I will review these developments and go on to discuss the theory required for directed sequencing strategies.
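The core Lander-Waterman expectations are compact enough to state in a few lines; this helper evaluates the standard formulas (redundancy c = NL/G, expected islands N e^{-c*sigma}, expected coverage 1 - e^{-c}) with illustrative numbers:

```python
import numpy as np

def lander_waterman(G, L, N, T):
    """Classic Lander-Waterman expectations.
    G: genome length, L: clone/read length, N: number of clones,
    T: minimum detectable overlap length."""
    c = N * L / G                         # redundancy (coverage)
    sigma = 1.0 - T / L
    islands = N * np.exp(-c * sigma)      # expected number of islands (contigs)
    covered = 1.0 - np.exp(-c)            # expected fraction of genome covered
    return c, islands, covered

c, islands, covered = lander_waterman(G=3e9, L=800, N=2e7, T=50)
print(f"c={c:.2f}, islands={islands:.3e}, fraction covered={covered:.4f}")
```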
Matched metal die compression molded structural random fiber sheet molding compound flywheel
Kulkarni, Satish V.; Christensen, Richard M.; Toland, Richard H.
1985-01-01
A flywheel (10) is described that is useful for energy storage in a hybrid vehicle automotive power system or in some stationary applications. The flywheel (10) has a body of essentially planar isotropic high strength structural random fiber sheet molding compound (SMC-R). The flywheel (10) may be economically produced by a matched metal die compression molding process. The flywheel (10) makes energy intensive efficient use of a fiber/resin composite while having a shape designed by theory assuming planar isotropy.
Kulkarni, S.V.; Christensen, R.M.; Toland, R.H.
1980-09-24
A flywheel is described that is useful for energy storage in a hybrid vehicle automotive power system or in some stationary applications. The flywheel has a body of essentially planar isotropic high strength structural random fiber sheet molding compound (SMC-R). The flywheel may be economically produced by a matched metal die compression molding process. The flywheel makes energy intensive efficient use of a fiber/resin composite while having a shape designed by theory assuming planar isotropy.
Direct generation of all-optical random numbers from optical pulse amplitude chaos.
Li, Pu; Wang, Yun-Cai; Wang, An-Bang; Yang, Ling-Zhen; Zhang, Ming-Jiang; Zhang, Jian-Zhong
2012-02-13
We propose and theoretically demonstrate an all-optical method for directly generating all-optical random numbers from the pulse amplitude chaos produced by a mode-locked fiber ring laser. Under an appropriate pump intensity, the mode-locked laser can experience a quasi-periodic route to chaos. Such chaos consists of a stream of pulses with a fixed repetition frequency but random intensities. The method requires neither a sampling procedure nor externally triggered clocks; instead, the chaotic pulse stream is quantized directly into a random number sequence via an all-optical flip-flop. Moreover, our simulation results show that the pulse amplitude chaos has no periodicity and possesses a highly symmetric amplitude distribution. Thus, in theory, the obtained random number sequence has high-quality randomness, verified by industry-standard statistical tests, without any post-processing.
How Fast Can Networks Synchronize? A Random Matrix Theory Approach
NASA Astrophysics Data System (ADS)
Timme, Marc; Wolf, Fred; Geisel, Theo
2004-03-01
Pulse-coupled oscillators constitute a paradigmatic class of dynamical systems interacting on networks because they model a variety of biological systems including flashing fireflies and chirping crickets as well as pacemaker cells of the heart and neural networks. Synchronization is one of the most simple and most prevailing kinds of collective dynamics on such networks. Here we study collective synchronization [1] of pulse-coupled oscillators interacting on asymmetric random networks. Using random matrix theory we analytically determine the speed of synchronization in such networks in dependence on the dynamical and network parameters [2]. The speed of synchronization increases with increasing coupling strengths. Surprisingly, however, it stays finite even for infinitely strong interactions. The results indicate that the speed of synchronization is limited by the connectivity of the network. We discuss the relevance of our findings to general equilibration processes on complex networks. [1] M. Timme, F. Wolf, T. Geisel, Phys. Rev. Lett. 89:258701 (2002). [2] M. Timme, F. Wolf, T. Geisel, cond-mat/0306512 (2003).
Random walk, diffusion and mixing in simulations of scalar transport in fluid flows
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.
2008-12-01
Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
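A toy version of the scheme: particles perform a Brownian walk while nearby pairs repeatedly mix their scalar values toward the mean. The walk reproduces the diffusive mean profile while mixing controls particle-to-particle scatter; the domain, step sizes, and adjacent-pair mixing rule below are illustrative choices, not the paper's exact scheme:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
n, steps, dt, D = 20_000, 50, 1e-3, 0.5

x = rng.uniform(-1.0, 1.0, n)              # Lagrangian particle positions
phi = (x < 0.0).astype(float)              # scalar carried by each particle

for _ in range(steps):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)   # random walk step
    order = np.argsort(x)                  # mix adjacent (nearby) particle pairs
    i, j = order[0::2], order[1::2]
    mean = 0.5 * (phi[i] + phi[j])
    phi[i] = mean
    phi[j] = mean

# Binned scalar profile vs the analytic diffusion (erfc) solution
bins = np.linspace(-0.5, 0.5, 11)
centers = 0.5 * (bins[1:] + bins[:-1])
idx = np.digitize(x, bins)
profile = np.array([phi[idx == k].mean() for k in range(1, len(bins))])
exact = 0.5 * erfc(centers / np.sqrt(4.0 * D * steps * dt))
print(np.round(profile, 2))
print(np.round(exact, 2))
```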
Schools and Drug Markets: Examining the Relationship between Schools and Neighborhood Drug Crime
ERIC Educational Resources Information Center
Willits, Dale; Broidy, Lisa M.; Denman, Kristine
2015-01-01
Research on drug markets indicates that they are not randomly distributed. Instead they are concentrated around specific types of places. Theoretical and empirical literature implicates routine activities and social disorganization processes in this distribution. In the current study, we examine whether, consistent with these theories, drug…
Structure of the universe: when observation guides theory... or not
NASA Astrophysics Data System (ADS)
Nazé, Yaël
The scientific method is often presented, e.g. to children, as a linear process, starting by a question and ending by the elaboration of a theory, with a few experiments in-between. The reality of the building of science is much more complex, with back-and-forth motions between theories and observations, with some intervention of technology and randomness. This complex process is not always correctly understood and assimilated, even amongst scientists. The hero cult, mixed with some revisionism, still exists despite in-depth historical studies. In this context, it may be useful to comparatively examine the reaction to crucial observations, their interpretation and their impact on the contemporaneous theory development. Four examples are presented here, all linked to the question of the 'construction of the heavens' but at different epochs.
Cavity master equation for the continuous time dynamics of discrete-spin models.
Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R
2017-05-01
We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.
Cavity master equation for the continuous time dynamics of discrete-spin models
NASA Astrophysics Data System (ADS)
Aurell, E.; Del Ferraro, G.; Domínguez, E.; Mulet, R.
2017-05-01
We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.
What Randomized Benchmarking Actually Measures
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...
2017-09-28
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
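The RB decay described here is the standard A p^m + B model. A minimal sketch fitting synthetic survival data and converting the decay parameter to the RB number r = (d-1)(1-p)/d; the data and noise level are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def rb_decay(m, A, B, p):
    """Standard RB model: survival probability vs sequence length m."""
    return A * p ** m + B

# Synthetic RB data (assumed, for illustration)
m = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
surv = rb_decay(m, A=0.45, B=0.5, p=0.995) + 0.01 * rng.standard_normal(m.size)

(A, B, p), _ = curve_fit(rb_decay, m, surv, p0=(0.5, 0.5, 0.99))
d = 2                                 # single-qubit gate set
r = (d - 1) * (1.0 - p) / d           # RB number extracted from the decay
print(f"fitted p={p:.5f}, RB error rate r={r:.2e}")
```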
Modeling missing data in knowledge space theory.
de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio
2015-12-01
Missing data are a well-known issue in statistical inference, because some responses may be missing, even when data are collected carefully. The problem that arises in these cases is how to deal with missing data. In this article, the missingness is analyzed in knowledge space theory, and in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: The former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. The IMBLIM and the MissBLIM modeled the missingness in a satisfactory way, in both a simulation study and an empirical application, depending on the process that generates the missingness: If the missing data-generating process is of type missing completely at random, then either IMBLIM or MissBLIM provide adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data. (c) 2015 APA, all rights reserved.
On the theory of Brownian motion with the Alder-Wainwright effect
NASA Astrophysics Data System (ADS)
Okabe, Yasunori
1986-12-01
The Stokes-Boussinesq-Langevin equation, which describes the time evolution of Brownian motion with the Alder-Wainwright effect, can be treated in the framework of the theory of KMO-Langevin equations which describe the time evolution of a real, stationary Gaussian process with T-positivity (reflection positivity) originating in axiomatic quantum field theory. After proving the fluctuation-dissipation theorems for KMO-Langevin equations, we obtain an explicit formula for the deviation from the classical Einstein relation that occurs in the Stokes-Boussinesq-Langevin equation with a white noise as its random force. We are interested in whether or not it can be measured experimentally.
Fractal Signals & Space-Time Cartoons
NASA Astrophysics Data System (ADS)
Oetama, H. C. Jakob; Maksoed, W. H.
2016-03-01
In "Theory of Scale Relativity" (1991), L. Nottale states that "scale relativity is a geometrical & fractal space-time theory". This is compared with "a unified, wavelet based framework for efficiently synthesizing, analyzing & processing several broad classes of fractal signals" (Gregory W. Wornell, "Signal Processing with Fractals", 1995), whose Fig. 1.1 shows a simple waveform from a statistically scale-invariant random process [ibid., p. 3]. Drawing on the accompanying RLE Technical Report 566, "Synthesis, Analysis & Processing of Fractal Signals" (Wornell, Oct. 1991), the aim is to relate the growth expression a Δt + (1 - β Δt) ... in Petersen et al., "Scale invariant properties of public debt growth" (2010), p. 38006-2, to the expression 1/{1 - (2α(λ)/3π) ln(λ/r)} given in Nottale (1991), p. 24. The acknowledgment is devoted to the late H.E. Mr. Brigadier General (TNI, ret.) Prof. Ir. HANDOJO.
Diffusion in randomly perturbed dissipative dynamics
NASA Astrophysics Data System (ADS)
Rodrigues, Christian S.; Chechkin, Aleksei V.; de Moura, Alessandro P. S.; Grebogi, Celso; Klages, Rainer
2014-11-01
Dynamical systems having many coexisting attractors present interesting properties from both fundamental theoretical and modelling points of view. When such dynamics is under bounded random perturbations, the basins of attraction are no longer invariant and there is the possibility of transport among them. Here we introduce a basic theoretical setting which enables us to study this hopping process from the perspective of anomalous transport using the concept of a random dynamical system with holes. We apply it to a simple model by investigating the role of hyperbolicity for the transport among basins. We show numerically that our system exhibits non-Gaussian position distributions, power-law escape times, and subdiffusion. Our simulation results are reproduced consistently from stochastic continuous time random walk theory.
Statistical theory of correlations in random packings of hard particles.
Jin, Yuliang; Puckett, James G; Makse, Hernán A
2014-05-01
A random packing of hard particles represents a fundamental model for granular matter. Despite its importance, analytical modeling of random packings remains difficult due to the existence of strong correlations which preclude the development of a simple theory. Here, we take inspiration from liquid theories for the n-particle angular correlation function to develop a formalism of random packings of hard particles from the bottom up. A progressive expansion into a shell of particles converges in the large layer limit under a Kirkwood-like approximation of higher-order correlations. We apply the formalism to hard disks and predict the density of two-dimensional random close packing (RCP), ϕ_rcp = 0.85 ± 0.01, and random loose packing (RLP), ϕ_rlp = 0.67 ± 0.01. Our theory also predicts a phase diagram and angular correlation functions that are in good agreement with experimental and numerical data.
Random walks and diffusion on networks
NASA Astrophysics Data System (ADS)
Masuda, Naoki; Porter, Mason A.; Lambiotte, Renaud
2017-11-01
Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
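A minimal example of the discrete-time case: on a connected, non-bipartite undirected network the walk's stationary distribution is degree-proportional, pi_i = k_i/2m, which the sketch below verifies by power iteration on a standard example graph (networkx assumed available):

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                 # example network
A = nx.to_numpy_array(G)
k = A.sum(axis=1)

# Transition matrix of a discrete-time random walk: P_ij = A_ij / k_i
P = A / k[:, None]
pi = np.full(len(k), 1.0 / len(k))
for _ in range(500):                       # power iteration
    pi = pi @ P

# For an undirected network the stationary distribution is k_i / 2m
print(np.allclose(pi, k / k.sum(), atol=1e-8))
```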
The explicit form of the rate function for semi-Markov processes and its contractions
NASA Astrophysics Data System (ADS)
Sughiyama, Yuki; Kobayashi, Tetsuya J.
2018-03-01
We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by exploiting the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.
2014-09-01
... optimal diagonal loading which minimizes the MSE. The behavior of optimal diagonal loading when the arrival process is composed of plane waves embedded ... observation vectors. The examples of the ensemble correlation matrix corresponding to the input process consisting of a single or multiple plane waves ... Y*_ij is the complex conjugate of Y_ij. This result is used in order to evaluate the expectations of different quadratic forms. The Poincaré-Nash ...
Efficient Quantum Pseudorandomness.
Brandão, Fernando G S L; Harrow, Aram W; Horodecki, Michał
2016-04-29
Randomness is both a useful way to model natural systems and a useful tool for engineered systems, e.g., in computation, communication, and control. Fully random transformations require exponential time for either classical or quantum systems, but in many cases pseudorandom operations can emulate certain properties of truly random ones. Indeed, in the classical realm there is by now a well-developed theory regarding such pseudorandom operations. However, the construction of such objects turns out to be much harder in the quantum case. Here, we show that random quantum unitary time evolutions ("circuits") are a powerful source of quantum pseudorandomness. This gives for the first time a polynomial-time construction of quantum unitary designs, which can replace fully random operations in most applications, and shows that generic quantum dynamics cannot be distinguished from truly random processes. We discuss applications of our result to quantum information science, cryptography, and understanding the self-equilibration of closed quantum dynamics.
NASA Astrophysics Data System (ADS)
Rogotis, Savvas; Palaskas, Christos; Ioannidis, Dimosthenis; Tzovaras, Dimitrios; Likothanassis, Spiros
2015-11-01
This work aims to present an extended framework for automatically recognizing suspicious activities in outdoor perimeter surveillance systems based on infrared video processing. By combining size-, speed-, and appearance-based features, such as local phase quantization and histograms of oriented gradients, short-duration actions are recognized and used as input, along with spatial information, for modeling target activities using the theory of hidden conditional random fields (HCRFs). HCRFs are used to classify an observation sequence into the most appropriate activity label class, thus discriminating high-risk activities, such as trespassing, from zero-risk activities, such as loitering outside the perimeter. The effectiveness of this approach is demonstrated with experimental results in various scenarios that represent suspicious activities in perimeter surveillance systems.
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Robinson, P. A.
1998-01-01
Existing, competing theories for coronal and interplanetary type III solar radio bursts appeal to one or more of modulational instability, electrostatic (ES) decay processes, or stochastic growth physics to preserve the electron beam, limit the levels of Langmuir-like waves driven by the beam, and produce wave spectra capable of coupling nonlinearly to generate the observed radio emission. Theoretical constraints exist on the wavenumbers and relative sizes of the wave bandwidth and nonlinear growth rate for which Langmuir waves are subject to modulational instability and the parametric and random phase versions of ES decay. A constraint also exists on whether stochastic growth theory (SGT) is appropriate. These constraints are evaluated here using the beam, plasma, and wave properties (1) observed in specific interplanetary type III sources, (2) predicted nominally for the corona, and (3) predicted at heliocentric distances greater than a few solar radii by power-law models based on interplanetary observations. It is found that the Langmuir waves driven directly by the beam have wavenumbers that are almost always too large for modulational instability but are appropriate to ES decay. Even for waves scattered to lower wavenumbers (by ES decay, for instance), the wave bandwidths are predicted to be too large and the nonlinear growth rates too small for modulational instability to occur for the specific interplanetary events studied or the great majority of Langmuir wave packets in type III sources at arbitrary heliocentric distances. Possible exceptions are for very rare, unusually intense, narrowband wave packets, predominantly close to the Sun, and for the front portion of very fast beams traveling through unusually dilute, cold solar wind plasmas. Similar arguments demonstrate that the ES decay should proceed almost always as a random phase process rather than a parametric process, with similar exceptions. These results imply that it is extremely rare for modulational instability or parametric decay to proceed in type III sources at any heliocentric distance: theories for type III bursts based on modulational instability or parametric decay are therefore not viable in general. In contrast, the constraint on SGT can be satisfied and random phase ES decay can proceed at all heliocentric distances under almost all circumstances. (The contrary circumstances involve unusually slow, broad beams moving through unusually hot regions of the Corona.) The analyses presented here strongly justify extending the existing SGT-based model for interplanetary type III bursts (which includes SGT physics, random phase ES decay, and specific electromagnetic emission mechanisms) into a general theory for type III bursts from the corona to beyond 1 AU. This extended theory enjoys strong theoretical support, explains the characteristics of specific interplanetary type III bursts very well, and can account for the detailed dynamic spectra of type III bursts from the lower corona and solar wind.
NASA Technical Reports Server (NTRS)
Cairns, I. H.
1984-01-01
Observations of low frequency ion acoustic-like waves associated with Langmuir waves present during interplanetary Type 3 bursts are used to study plasma emission mechanisms and wave processes involving ion acoustic waves. It is shown that the observed wave frequency characteristics are consistent with the processes L yields T + S (where L = Langmuir waves, T = electromagnetic waves, S = ion acoustic waves) and L yields L' + S proceeding. The usual incoherent (random phase) version of the process L yields T + S cannot explain the observed wave production time scale. The clumpy nature of the observed Langmuir waves is vital to the theory of IP Type 3 bursts. The incoherent process L yields T + S may encounter difficulties explaining the observed Type 3 brightness temperatures when Langmuir wave clumps are incorporated into the theory. The parametric process L yields T + S may be the important emission process for the fundamental radiation of interplanetary Type 3 bursts.
Many States Include Evolution Questions on Assessments
ERIC Educational Resources Information Center
Cavanagh, Sean
2005-01-01
The theory of evolution, pioneered most famously by Charles Darwin, posits that humans and other living creatures have descended from common ancestors over time through a process of random mutation and natural selection. It is widely considered to be a pillar of modern biology. Over the past year, however, public education has been roiled by…
Spreading Effect in Industrial Complex Network Based on Revised Structural Holes Theory
Xing, Lizhi; Ye, Qing; Guan, Jun
2016-01-01
This paper analyzes the spreading effect of industrial sectors with a complex network model from the perspective of econophysics. Input-output analysis, an important research tool, focuses mostly on static analysis. However, the fundamental aim of industry analysis is to figure out how interaction between different industries impacts economic development, which is a dynamic process. Thus, an industrial complex network based on input-output tables from WIOD is proposed as a bridge connecting accurate static quantitative analysis with comparable dynamic analysis. Applying revised structural holes theory, flow betweenness and random walk centrality were chosen to evaluate industrial sectors' long-term and short-term spreading effects, respectively. It is shown that industries with higher flow betweenness or random walk centrality bring about a more intensive spreading effect along the industrial chains they stand in, because an industrial sector's value-stream transmission depends on how many products or services it can obtain from the other sectors, and such sectors act as brokers with greater information advantages and more intermediation gains. PMID:27218468
Spreading Effect in Industrial Complex Network Based on Revised Structural Holes Theory.
Xing, Lizhi; Ye, Qing; Guan, Jun
2016-01-01
This paper analyzes the spreading effect of industrial sectors with a complex network model from the perspective of econophysics. Input-output analysis, an important research tool, focuses mostly on static analysis. However, the fundamental aim of industry analysis is to figure out how interaction between different industries impacts economic development, which is a dynamic process. Thus, an industrial complex network based on input-output tables from WIOD is proposed as a bridge connecting accurate static quantitative analysis with comparable dynamic analysis. Applying revised structural holes theory, flow betweenness and random walk centrality were chosen to evaluate industrial sectors' long-term and short-term spreading effects, respectively. It is shown that industries with higher flow betweenness or random walk centrality bring about a more intensive spreading effect along the industrial chains they stand in, because an industrial sector's value-stream transmission depends on how many products or services it can obtain from the other sectors, and such sectors act as brokers with greater information advantages and more intermediation gains.
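The two broker measures can be illustrated on a toy flow network. The sketch below uses networkx's shortest-path betweenness and current-flow (random-walk) betweenness as stand-ins for the paper's flow betweenness and random walk centrality, on entirely hypothetical inter-industry weights:

```python
import networkx as nx

# Toy weighted inter-industry flow network (values are hypothetical)
edges = [("mining", "metals", 30), ("metals", "machinery", 25),
         ("machinery", "construction", 20), ("chemicals", "agriculture", 10),
         ("agriculture", "food", 35), ("energy", "metals", 15),
         ("energy", "chemicals", 12), ("machinery", "food", 5)]
G = nx.Graph()
G.add_weighted_edges_from(edges)

bc = nx.betweenness_centrality(G, weight="weight")
rw = nx.current_flow_betweenness_centrality(G, weight="weight")
for v in sorted(G, key=lambda v: -rw[v])[:4]:
    print(f"{v:12s} betweenness={bc[v]:.3f} random-walk={rw[v]:.3f}")
```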
Rigorously testing multialternative decision field theory against random utility models.
Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg
2014-06-01
Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Supernatural believers attribute more intentions to random movement than skeptics: an fMRI study.
Riekki, Tapani; Lindeman, Marjaana; Raij, Tuukka T
2014-01-01
A host of research has attempted to explain why some believe in the supernatural and some do not. One suggested explanation for commonly held supernatural beliefs is that they are a by-product of theory of mind (ToM) processing. However, this does not explain why skeptics with intact ToM processes do not believe. We employed fMRI to investigate activation differences in ToM-related brain circuitries between supernatural believers (N = 12) and skeptics (N = 11) while they watched 2D animations of geometric objects moving intentionally or randomly and rated the intentionality of the animations. The ToM-related circuitries in the medial prefrontal cortex (mPFC) were localized by contrasting intention-rating-related and control-rating-related brain activation. Compared with the skeptics, the supernatural believers rated the random movements as more intentional and had stronger activation of the ToM-related circuitries during the animation with random movement. The strength of the ToM-related activation covaried with the intentionality ratings. These findings provide evidence that differences in ToM-related activations are associated with supernatural believers' tendency to interpret random phenomena in mental terms. Thus, differences in ToM processing may contribute to differences between believing and unbelieving.
Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory
2016-05-12
... proved. Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes ... Keywords: mathematical statistics; time series; Markov chains; random ...
The effect of the neural activity on topological properties of growing neural networks.
Gafarov, F M; Gafarova, V R
2016-09-01
The connectivity structure in cortical networks defines how information is transmitted and processed; it is a source of the complex spatiotemporal patterns of a network's development, and the process of creation and deletion of connections continues throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the neural network growth process from disconnected neurons to fully connected networks. To investigate quantitatively the influence of the network's activity on its topological properties, we compared it with a randomly grown network that does not depend on the network's activity. Using methods from random graph theory to analyze the structure of the network's connections, we show that growth in neural networks results in the formation of a well-known "small-world" network.
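The "small-world" comparison can be sketched with standard graph metrics. Since reproducing the activity-dependent growth model would require a neural simulation, the example below substitutes a Watts-Strogatz graph as a proxy for the grown network and compares it against a degree-preserving randomization:

```python
import networkx as nx

# Proxy for a grown network vs a degree-matched random reference:
# the small-world signature is high clustering with short path lengths.
G = nx.watts_strogatz_graph(n=500, k=8, p=0.1, seed=1)   # stand-in topology
R = nx.random_reference(G, niter=2, seed=1)              # degree-preserving rewiring

for name, H in (("grown (proxy)", G), ("randomized", R)):
    print(name, "C =", round(nx.average_clustering(H), 3),
          "L =", round(nx.average_shortest_path_length(H), 2))
```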
Quantum-like Viewpoint on the Complexity and Randomness of the Financial Market
NASA Astrophysics Data System (ADS)
Choustova, Olga
In economics and financial theory, analysts use random walk and more general martingale techniques to model behavior of asset prices, in particular share prices on stock markets, currency exchange rates and commodity prices. This practice has its basis in the presumption that investors act rationally and without bias, and that at any moment they estimate the value of an asset based on future expectations. Under these conditions, all existing information affects the price, which changes only when new information comes out. By definition, new information appears randomly and influences the asset price randomly. Corresponding continuous time models are based on stochastic processes (this approach was initiated in the thesis of [4]), see, e.g., the books of [33] and [37] for historical and mathematical details.
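The continuous-time model alluded to is, in its textbook form, geometric Brownian motion for the asset price. A minimal path simulation with assumed drift and volatility:

```python
import numpy as np

rng = np.random.default_rng(8)
S0, mu, sigma = 100.0, 0.05, 0.2      # initial price, drift, volatility (assumed)
T, n = 1.0, 252                        # one year of daily steps

# Geometric Brownian motion: log-returns are i.i.d. Gaussian increments
dt = T / n
z = rng.standard_normal(n)
log_returns = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
path = S0 * np.exp(np.cumsum(log_returns))
print("final price:", round(float(path[-1]), 2))
```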
Plasma fluctuations as Markovian noise.
Li, B; Hazeltine, R D; Gentle, K W
2007-12-01
Noise theory is used to study the correlations of stationary Markovian fluctuations that are homogeneous and isotropic in space. The relaxation of the fluctuations is modeled by the diffusion equation. The spatial correlations of random fluctuations are modeled by the exponential decay. Based on these models, the temporal correlations of random fluctuations, such as the correlation function and the power spectrum, are calculated. We find that the diffusion process can give rise to the decay of the correlation function and a broad frequency spectrum of random fluctuations. We also find that the transport coefficients may be estimated by the correlation length and the correlation time. The theoretical results are compared with the observed plasma density fluctuations from the tokamak and helimak experiments.
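The exponential-correlation model has a Lorentzian power spectrum, S(omega) = 2*tau/(1 + (omega*tau)^2) for unit variance. The sketch below checks this against the band-averaged periodogram of a simulated Ornstein-Uhlenbeck process, a standard stand-in for diffusively relaxing fluctuations (not the experimental data of the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
tau, dt, n = 1.0, 0.01, 2 ** 17

# Ornstein-Uhlenbeck process: stationary noise with exponential correlations
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] * (1.0 - dt / tau) + np.sqrt(2.0 * dt / tau) * rng.standard_normal()

omega = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
psd = np.abs(np.fft.rfft(x)) ** 2 * dt / n          # periodogram estimate of S(omega)
for k in (100, 400, 1600, 6400):
    est = psd[k - 50 : k + 50].mean()               # band-average the noisy estimate
    lor = 2.0 * tau / (1.0 + (omega[k] * tau) ** 2)
    print(f"omega={omega[k]:7.2f}  psd={est:6.3f}  lorentzian={lor:6.3f}")
```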
Decision-Making in Agent-Based Models of Migration: State of the Art and Challenges.
Klabunde, Anna; Willekens, Frans
We review agent-based models (ABM) of human migration with respect to their decision-making rules. The most prominent behavioural theories used as decision rules are the random utility theory, as implemented in the discrete choice model, and the theory of planned behaviour. We identify the critical choices that must be made in developing an ABM, namely the modelling of decision processes and social networks. We also discuss two challenges that hamper the widespread use of ABM in the study of migration and, more broadly, demography and the social sciences: (a) the choice and the operationalisation of a behavioural theory (decision-making and social interaction) and (b) the selection of empirical evidence to validate the model. We offer advice on how these challenges might be overcome.
Staggered chiral random matrix theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, James C.
2011-02-01
We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.
Intermittency and random matrices
NASA Astrophysics Data System (ADS)
Sokoloff, Dmitry; Illarionov, E. A.
2015-08-01
A spectacular phenomenon of intermittency, i.e. a progressive growth of higher statistical moments of a physical field excited by an instability in a random medium, attracted the attention of Zeldovich in the last years of his life. At that time, the mathematical aspects underlying the physical description of this phenomenon were still under development and relations between various findings in the field remained obscure. Contemporary results from the theory of the product of independent random matrices (the Furstenberg theory) allowed the elaboration of the phenomenon of intermittency in a systematic way. We consider applications of the Furstenberg theory to some problems in cosmology and dynamo theory.
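The growth rate central to the Furstenberg theory, the top Lyapunov exponent of a product of i.i.d. random matrices, is easy to estimate by renormalizing a test vector; the random shear matrices below are an arbitrary toy ensemble, not one of the cosmology or dynamo applications discussed:

```python
import numpy as np

rng = np.random.default_rng(12)

def top_lyapunov(sample_matrix, n_steps=100_000):
    """Top Lyapunov exponent of a product of i.i.d. random matrices
    (Furstenberg), estimated by renormalizing a test vector."""
    v = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(n_steps):
        v = sample_matrix() @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm
    return total / n_steps

def sample():
    # Random unit-determinant shears: a toy model of growth in a random medium
    a = rng.normal()
    if rng.random() < 0.5:
        return np.array([[1.0, a], [0.0, 1.0]])
    return np.array([[1.0, 0.0], [a, 1.0]])

print("top Lyapunov exponent:", round(top_lyapunov(sample), 3))
```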
NASA Astrophysics Data System (ADS)
Borri, Claudia; Paggi, Marco
2015-02-01
The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has been only partially undertaken. The present experimental and numerical study provides an in-depth critical comparison on this matter, offering insight into the capabilities and limitations of applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out, and comprehensive software for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD by changing resolution as compared to what was expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of the joint PDFs of textured surfaces in the case of special surface treatments, not accounted for by fractal modeling.
Demetrius, L
2000-09-07
The science of thermodynamics is concerned with understanding the properties of inanimate matter in so far as they are determined by changes in temperature. The Second Law asserts that in irreversible processes there is a uni-directional increase in thermodynamic entropy, a measure of the degree of uncertainty in the thermal energy state of a randomly chosen particle in the aggregate. The science of evolution is concerned with understanding the properties of populations of living matter in so far as they are regulated by changes in generation time. Directionality theory, a mathematical model of the evolutionary process, establishes that in populations subject to bounded growth constraints, there is a uni-directional increase in evolutionary entropy, a measure of the degree of uncertainty in the age of the immediate ancestor of a randomly chosen newborn. This article reviews the mathematical basis of directionality theory and analyses the relation between directionality theory and statistical thermodynamics. We exploit an analytic relation between temperature and generation time to show that the directionality principle for evolutionary entropy is a non-equilibrium extension of the principle of a uni-directional increase of thermodynamic entropy. The analytic relation between these directionality principles is consistent with the hypothesis of the equivalence of fundamental laws as one moves up the hierarchy, from a molecular ensemble where the thermodynamic laws apply, to a population of replicating entities (molecules, cells, higher organisms), where evolutionary principles prevail. Copyright 2000 Academic Press.
Invariance property of wave scattering through disordered media
Pierrat, Romain; Ambichl, Philipp; Gigan, Sylvain; Haber, Alexander; Carminati, Rémi; Rotter, Stefan
2014-01-01
A fundamental insight in the theory of diffusive random walks is that the mean length of trajectories traversing a finite open system is independent of the details of the diffusion process. Instead, the mean trajectory length depends only on the system's boundary geometry and is thus unaffected by the value of the mean free path. Here we show that this result is rooted on a much deeper level than that of a random walk, which allows us to extend the reach of this universal invariance property beyond the diffusion approximation. Specifically, we demonstrate that an equivalent invariance relation also holds for the scattering of waves in resonant structures as well as in ballistic, chaotic or in Anderson localized systems. Our work unifies a number of specific observations made in quite diverse fields of science ranging from the movement of ants to nuclear scattering theory. Potential experimental realizations using light fields in disordered media are discussed. PMID:25425671
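The diffusive version of the invariance property is easy to check numerically. The sketch below (our construction, not the authors' code) launches random walkers into a disk of radius R with the cosine-law (isotropic-flux) entry the theorem assumes, scatters them isotropically with exponential free paths, and verifies that the mean path length before exit stays near πA/P = πR/2 whatever the mean free path.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_path_in_disk(mfp, R=1.0, n_traj=5000):
    total = 0.0
    for _ in range(n_traj):
        a = rng.uniform(0, 2 * np.pi)                  # uniform boundary point
        pos = R * np.array([np.cos(a), np.sin(a)])
        ix, iy = -pos / R                              # inward normal
        t = np.arcsin(2 * rng.uniform() - 1)           # cosine-weighted entry angle
        c, s = np.cos(t), np.sin(t)
        d = np.array([c * ix - s * iy, s * ix + c * iy])
        length = 0.0
        while True:
            step = rng.exponential(mfp)                # free path to next scattering
            nxt = pos + step * d
            if nxt @ nxt > R * R:                      # exits: add distance to boundary
                b = pos @ d
                length += -b + np.sqrt(b * b - (pos @ pos - R * R))
                break
            length += step
            pos = nxt
            phi = rng.uniform(0, 2 * np.pi)            # isotropic redirection
            d = np.array([np.cos(phi), np.sin(phi)])
        total += length
    return total / n_traj

for mfp in (0.1, 0.5, 2.0):
    print(f"mean free path {mfp}: <L> = {mean_path_in_disk(mfp):.3f}"
          f" (theory pi*R/2 = {np.pi / 2:.3f})")
```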
ERIC Educational Resources Information Center
Smith, David Arthur
2010-01-01
Much recent work in natural language processing treats linguistic analysis as an inference problem over graphs. This development opens up useful connections between machine learning, graph theory, and linguistics. The first part of this dissertation formulates syntactic dependency parsing as a dynamic Markov random field with the novel…
Brownian motion and its descendants according to Schrödinger
NASA Astrophysics Data System (ADS)
Garbaczewski, Piotr; Vigier, Jean-Pierre
1992-08-01
We revisit Schrödinger's original suggestion of the existence of a special class of random processes, which have their origin in the Einstein-Smoluchowski theory of Brownian motion. Our principal goal is to clarify the physical nature of links connecting the realistic Brownian motion with the abstract mathematical formalism of Nelson and Bernstein diffusions.
Robin A. J. Taylor; Daniel A. Herms; Louis R. Iverson
2008-01-01
The dispersal of organisms is rarely random, although diffusion processes can be useful models for movement in approximately homogeneous environments. However, the environments through which all organisms disperse are far from uniform at all scales. The emerald ash borer (EAB), Agrilus planipennis, is obligate on ash (Fraxinus spp...
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2016-02-01
In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should not be used uncritically for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process as assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point of earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.
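The poor tail convergence the comment emphasizes can be illustrated directly. In the hedged sketch below (illustrative parameter values, not Pisarenko et al.'s data), magnitudes are drawn from a truncated exponential distribution and a GPD is fitted to the peaks-over-threshold excesses: because a TED has a density that stays positive at its finite endpoint, its limiting GPD shape is -1, but the fitted shape drifts toward that limit only slowly as the threshold rises.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

beta, m_min, m_max = 1 / np.log(10), 4.0, 8.0   # Gutenberg-Richter-like scale, bounds
# truncexpon: shape b = (upper - lower)/scale, loc = lower, scale = 1/rate
mags = stats.truncexpon.rvs((m_max - m_min) / beta, loc=m_min, scale=beta,
                            size=200000, random_state=rng)

for u in (5.0, 6.0, 7.0):
    exc = mags[mags > u] - u                     # POT excesses
    shape, _, scale = stats.genpareto.fit(exc, floc=0.0)
    print(f"threshold {u}: n_exc={exc.size}, fitted GPD shape = {shape:.3f}")
```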
The coalescent process in models with selection and recombination.
Hudson, R R; Kaplan, N L
1988-11-01
The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh do not match the predictions of this simple model very well.
Bayesian networks and information theory for audio-visual perception modeling.
Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis
2010-09-01
Thanks to their different senses, human observers acquire multiple streams of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
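The mutual-information screening step described above can be sketched in a few lines. The variables below are hypothetical stand-ins (not the authors' audio-visual data): two noisy copies of a hidden binary factor, whose dependence a plug-in estimate of I(X;Y) makes visible.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for discrete arrays x, y."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(4)
cause = rng.integers(0, 2, 5000)                  # hidden common factor
audio = (cause + (rng.random(5000) < 0.1)) % 2    # noisy copy 1
visual = (cause + (rng.random(5000) < 0.2)) % 2   # noisy copy 2
print("I(audio; visual) =", round(mutual_information(audio, visual), 3), "bits")
```

Conditioning such estimates on a candidate parent variable (here, the hypothetical `cause`) is what lets the elicitation process test the conditional independencies a Bayesian network encodes.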
Naming games in two-dimensional and small-world-connected random geometric networks.
Lu, Qiming; Korniss, G; Szymanski, B K
2008-01-01
We investigate a prototypical agent-based model, the naming game, on two-dimensional random geometric networks. The naming game [Baronchelli, J. Stat. Mech.: Theory Exp. (2006) P06014] is a minimal model employing local communications, one that captures the emergence of shared communication schemes (languages) in a population of autonomous semiotic agents. Implementing the naming game with local broadcasts on random geometric graphs serves as a model for agreement dynamics in large-scale, autonomously operating wireless sensor networks. Further, it captures essential features of the scaling properties of the agreement process for spatially embedded autonomous agents. Among the relevant observables capturing the temporal properties of the agreement process, we investigate the cluster-size distribution and the distribution of the agreement times, both exhibiting dynamic scaling. We also present results for the case when a small density of long-range communication links is added on top of the random geometric graph, resulting in a "small-world"-like network and yielding a significantly reduced time to reach global agreement.
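A minimal version of the model is easy to reproduce. The sketch below implements the basic pairwise speaker-listener naming game on a random geometric graph (the paper studies a local-broadcast variant; sizes, radius and run length here are illustrative). Consistent with the slow spatial coarsening described above, long runs are needed before full consensus on such graphs.

```python
import numpy as np

rng = np.random.default_rng(5)

n, radius = 200, 0.12
pts = rng.random((n, 2))                          # nodes in the unit square
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
nbrs = [np.flatnonzero((d2[i] < radius**2) & (np.arange(n) != i)) for i in range(n)]

inventories = [set() for _ in range(n)]
n_words = 0
for _ in range(300000):
    i = int(rng.integers(n))
    if nbrs[i].size == 0:
        continue
    j = int(rng.choice(nbrs[i]))                  # random neighbour as listener
    if not inventories[i]:                        # speaker invents a word if needed
        inventories[i] = {n_words}
        n_words += 1
    word = int(rng.choice(sorted(inventories[i])))
    if word in inventories[j]:                    # success: both collapse to the word
        inventories[i], inventories[j] = {word}, {word}
    else:                                         # failure: listener adds the word
        inventories[j].add(word)

print("distinct words in play:", len(set().union(*inventories)),
      "| agents holding a single word:", sum(len(s) == 1 for s in inventories))
```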
ERIC Educational Resources Information Center
Bernard, Paquito; Carayol, Marion; Gourlan, Mathieu; Boiché, Julie; Romain, Ahmed Jérôme; Bortolon, Catherine; Lareyre, Olivier; Ninot, Gregory
2017-01-01
A meta-analysis of randomized controlled trials (RCTs) has recently showed that theory-based interventions designed to promote physical activity (PA) significantly increased PA behavior. The objective of the present study was to investigate the moderators of the efficacy of these theory-based interventions. Seventy-seven RCTs evaluating…
Qubit dephasing due to low-frequency noise.
NASA Astrophysics Data System (ADS)
Sverdlov, Victor; Rabenstein, Kristian; Averin, Dmitri
2004-03-01
We have numerically investigated the effects of classical low-frequency noise on qubit dynamics beyond the standard lowest-order perturbation theory in the coupling. Noise is generated as a random process with a correlation function characterized by two parameters, the amplitude v_0 and the cut-off frequency 2π/τ. Time evolution of the density matrix was averaged over up to 10^7 noise realizations. Contrary to the relaxation time T_1, which for v_0 < ω, where ω is the qubit oscillation frequency, is always given correctly by the "golden-rule" expression, the dephasing time deviates from the perturbation-theory result when (v_0/ω)^2 (ωτ) ≥ 1. In this regime, even for an unbiased qubit, for which pure dephasing vanishes in perturbation theory, the dephasing is much larger than its perturbation-theory value 1/(2T_1).
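A stripped-down version of such a noise-averaged calculation is shown below. This is a simplified longitudinal-noise (pure dephasing) analogue, not the authors' computation: the noise is an Ornstein-Uhlenbeck process with amplitude v0 and correlation time tau, and the coherence <exp(-i ∫ v dt)> is averaged over realizations and compared with the exact Gaussian-noise prediction (whose t >> tau limit is the motional-narrowing rate v0^2 * tau). All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

v0, tau, dt, n_steps, n_real = 0.2, 2.0, 0.01, 1000, 4000
v = v0 * rng.standard_normal(n_real)         # stationary initial noise values
phase = np.zeros(n_real)
e = np.exp(-dt / tau)                        # exact OU decay factor per step
for _ in range(n_steps):
    phase += v * dt                          # accumulate the random phase
    v = e * v + v0 * np.sqrt(1 - e**2) * rng.standard_normal(n_real)

t = dt * n_steps
coherence = np.abs(np.mean(np.exp(-1j * phase)))
# Gaussian noise gives exactly exp(-v0^2 * (tau*t - tau^2*(1 - exp(-t/tau))))
exact = np.exp(-v0**2 * (tau * t - tau**2 * (1 - np.exp(-t / tau))))
print(f"t = {t}: simulated {coherence:.3f}, Gaussian prediction {exact:.3f}")
```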
Numerical simulation of swept-wing flows
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1991-01-01
Efforts of the last six months to computationally model the transition process characteristics of flow over swept wings are described. Specifically, the crossflow instability and crossflow/Tollmien-Schlichting wave interactions are analyzed through the numerical solution of the full 3D Navier-Stokes equations including unsteadiness, curvature, and sweep. This approach is chosen because of the complexity of the problem and because it appears that linear stability theory is insufficient to explain the discrepancies between different experiments and between theory and experiment. The leading edge region of a swept wing is considered in a 3D spatial simulation with random disturbances as the initial conditions.
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
NASA Astrophysics Data System (ADS)
Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-12-01
In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).
Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc
2016-01-01
The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities at the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for Galton-Watson branching processes when the number of offspring of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter case, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
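The critical case is simple to probe numerically. The sketch below (our illustration, with the generation horizon n playing the role of the finite size) simulates a critical Galton-Watson process whose offspring distribution is geometric, p_k = (1/2)^(k+1), with mean 1 and variance sigma^2 = 2; Kolmogorov's asymptotics then give a survival probability to generation n of roughly 2/(sigma^2 * n) = 1/n.

```python
import numpy as np

rng = np.random.default_rng(7)

def survives(n_gen, cap=10000):
    z = 1
    for _ in range(n_gen):
        # geometric(1/2) on {1,2,...} minus 1 gives offspring counts {0,1,...}
        z = int(rng.geometric(0.5, size=z).sum()) - z
        if z == 0:
            return False
        if z > cap:          # a population this large effectively survives the horizon
            return True
    return True

for n in (10, 40, 160):
    p = np.mean([survives(n) for _ in range(20000)])
    print(f"n={n}: simulated survival {p:.4f}, 1/n = {1/n:.4f}")
```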
2007-11-01
Florea, Anne-Laure Jousselme, Éloi Bossé; DRDC Valcartier TR 2003-319; Defence R&D Canada – Valcartier; November 2007. [Only front matter was recovered: table-of-contents entries on imprecise and uncertain information, the model of information proposed by Philippe Smets, and Figure 5, "The process of information modelling".]
ERIC Educational Resources Information Center
Supovitz, Jonathan; Sirinides, Philip
2018-01-01
In a randomized controlled trial of a teacher data-use intervention, the Linking Study tested the impacts of a cyclical and collaborative process that linked teachers' data on instructional practice with data on their students' learning. This article describes the theory of the intervention and its roots in the literature as a backdrop for an…
ERIC Educational Resources Information Center
Lippke, Sonia; Schwarzer, Ralf; Ziegelmann, Jochen P.; Scholz, Urte; Schuz, Benjamin
2010-01-01
Health education interventions can be tailored toward stages of change. This strategy is based on theories that predict at which stage which variables are indicative of subsequent behavior change processes. For example, planning is regarded as being effective in intenders. However, rather few studies have tested whether matched interventions are…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
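For orientation, the basic fitting procedure that defines r can be sketched on synthetic data (this toy is ours, not the paper's gate-set model). Survival probabilities decay as A*f^m + B with sequence length m, and one common convention converts the fitted decay to r = (1 - f)(d - 1)/d for a d-dimensional system.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

d, f_true, A, B = 2, 0.98, 0.5, 0.5
lengths = np.arange(1, 201, 10)
# ideal exponential decay plus finite-sampling noise
probs = A * f_true**lengths + B + 0.005 * rng.standard_normal(lengths.size)

def decay(m, a, b, f):
    return a * f**m + b

(a, b, f), _ = curve_fit(decay, lengths, probs, p0=(0.5, 0.5, 0.95))
r = (1 - f) * (d - 1) / d
print(f"fitted f = {f:.4f}, RB error rate r = {r:.5f}")
```

The paper's point is precisely that relating this r to a gate infidelity is representation-dependent; the fit itself, however, is this simple.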
Poisson process stimulation of an excitable membrane cable model.
Goldfinger, M D
1986-01-01
The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded in the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady-state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
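The qualitative effect of refractoriness on a Poisson drive can be seen with a much cruder model than the full cable equations: the dead-time sketch below (our caricature, with illustrative rate and refractory period) deletes input events that arrive within an absolute refractory period of the last accepted event, and the output intervals become random but non-Poisson, with a coefficient of variation below 1.

```python
import numpy as np

rng = np.random.default_rng(9)

rate, t_ref, n_events = 200.0, 0.003, 100000     # Hz, seconds (illustrative)
times = np.cumsum(rng.exponential(1.0 / rate, n_events))

accepted, last = [], -np.inf
for t in times:
    if t - last >= t_ref:                        # fire only if not refractory
        accepted.append(t)
        last = t

isi_in = np.diff(times)
isi_out = np.diff(accepted)
for name, isi in (("input", isi_in), ("output", isi_out)):
    print(f"{name}: CV = {isi.std() / isi.mean():.3f}  (Poisson gives 1.0)")
```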
Population density equations for stochastic processes with memory kernels
NASA Astrophysics Data System (ADS)
Lai, Yi Ming; de Kamps, Marc
2017-06-01
We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory, describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We accurately model the jump responses of both models to both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.
NASA Technical Reports Server (NTRS)
Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes. Thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied. The statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
NASA Astrophysics Data System (ADS)
Zhao, Yan; Stratt, Richard M.
2018-05-01
Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.
De Silva, Mary J; Breuer, Erica; Lee, Lucy; Asher, Laura; Chowdhary, Neerja; Lund, Crick; Patel, Vikram
2014-07-05
The Medical Research Councils' framework for complex interventions has been criticized for not including theory-driven approaches to evaluation. Although the framework does include broad guidance on the use of theory, it contains little practical guidance for implementers and there have been calls to develop a more comprehensive approach. A prospective, theory-driven process of intervention design and evaluation is required to develop complex healthcare interventions which are more likely to be effective, sustainable and scalable. We propose a theory-driven approach to the design and evaluation of complex interventions by adapting and integrating a programmatic design and evaluation tool, Theory of Change (ToC), into the MRC framework for complex interventions. We provide a guide to what ToC is, how to construct one, and how to integrate its use into research projects seeking to design, implement and evaluate complex interventions using the MRC framework. We test this approach by using ToC within two randomized controlled trials and one non-randomized evaluation of complex interventions. Our application of ToC in three research projects has shown that ToC can strengthen key stages of the MRC framework. It can aid the development of interventions by providing a framework for enhanced stakeholder engagement and by explicitly designing an intervention that is embedded in the local context. For the feasibility and piloting stage, ToC enables the systematic identification of knowledge gaps to generate research questions that strengthen intervention design. ToC may improve the evaluation of interventions by providing a comprehensive set of indicators to evaluate all stages of the causal pathway through which an intervention achieves impact, combining evaluations of intervention effectiveness with detailed process evaluations into one theoretical framework. Incorporating a ToC approach into the MRC framework holds promise for improving the design and evaluation of complex interventions, thereby increasing the likelihood that the intervention will be ultimately effective, sustainable and scalable. We urge researchers developing and evaluating complex interventions to consider using this approach, to evaluate its usefulness and to build an evidence base to further refine the methodology. Clinical trials.gov: NCT02160249.
NASA Astrophysics Data System (ADS)
Lacasa, Lucas
2014-09-01
Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.
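One member of the visibility-algorithm family where the i.i.d. degree distribution is known exactly is the horizontal visibility graph, with P(k) = (1/3)(2/3)^(k-2). The direct sketch below (our implementation) checks that law on white noise.

```python
import numpy as np

def hvg_degrees(x):
    """Degrees of the horizontal visibility graph of the series x."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        m = -np.inf                      # running max strictly between i and j
        for j in range(i + 1, n):
            if m < x[i] and m < x[j]:    # all intermediate samples lie below both
                deg[i] += 1
                deg[j] += 1
            m = max(m, x[j])
            if m >= x[i]:                # nothing further right can see i
                break
    return deg

rng = np.random.default_rng(10)
deg = hvg_degrees(rng.random(20000))     # i.i.d. (uncorrelated) series
for k in range(2, 8):
    emp = (deg == k).mean()
    theory = (1 / 3) * (2 / 3) ** (k - 2)
    print(f"k={k}: empirical {emp:.4f}, exact i.i.d. law {theory:.4f}")
```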
Dieguez, Sebastian; Wagner-Egger, Pascal; Gauvrit, Nicolas
2015-11-01
Belief in conspiracy theories has often been associated with a biased perception of randomness, akin to a nothing-happens-by-accident heuristic. Indeed, a low prior for randomness (i.e., believing that randomness is a priori unlikely) could plausibly explain the tendency to believe that a planned deception lies behind many events, as well as the tendency to perceive meaningful information in scattered and irrelevant details; both of these tendencies are traits diagnostic of conspiracist ideation. In three studies, we investigated this hypothesis and failed to find the predicted association between low prior for randomness and conspiracist ideation, even when randomness was explicitly opposed to malevolent human intervention. Conspiracy believers' and nonbelievers' perceptions of randomness were not only indistinguishable from each other but also accurate compared with the normative view arising from the algorithmic information framework. Thus, the motto "nothing happens by accident," taken at face value, does not explain belief in conspiracy theories. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Korepanov, Alexey
2017-12-01
Let {T : M \\to M} be a nonuniformly expanding dynamical system, such as logistic or intermittent map. Let {v : M \\to R^d} be an observable and {v_n = \\sum_{k=0}^{n-1} v circ T^k} denote the Birkhoff sums. Given a probability measure {μ} on M, we consider v n as a discrete time random process on the probability space {(M, μ)} . In smooth ergodic theory there are various natural choices of {μ} , such as the Lebesgue measure, or the absolutely continuous T-invariant measure. They give rise to different random processes. We investigate relation between such processes. We show that in a large class of measures, it is possible to couple (redefine on a new probability space) every two processes so that they are almost surely close to each other, with explicit estimates of "closeness". The purpose of this work is to close a gap in the proof of the almost sure invariance principle for nonuniformly hyperbolic transformations by Melbourne and Nicol.
Empiric validation of a process for behavior change.
Elliot, Diane L; Goldberg, Linn; MacKinnon, David P; Ranby, Krista W; Kuehl, Kerry S; Moe, Esther L
2016-09-01
Most behavior change trials focus on outcomes rather than deconstructing how those outcomes relate to programmatic theoretical underpinnings and intervention components. In this report, the process of change is compared for three evidence-based programs that shared theories, intervention elements and potential mediating variables. Each investigation was a randomized trial that assessed pre- and post-intervention variables using survey constructs with established reliability. Each also used mediation analyses to define relationships. The findings were combined using a pattern matching approach. Surprisingly, knowledge was a significant mediator in each program (a and b path effects [p<0.01]). Norms, perceived control abilities, and self-monitoring were confirmed in at least two studies (p<0.01 for each). Replication of findings across studies with a common design but varied populations provides a robust validation of the theory and processes of an effective intervention. Combined findings also demonstrate a means to substantiate process aspects and theoretical models to advance understanding of behavior change.
Assessing the significance of pedobarographic signals using random field theory.
Pataky, Todd C
2008-08-07
Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
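The gap between the Bonferroni and RFT thresholds is easy to quantify with the standard two-dimensional expected-Euler-characteristic approximation, P(max Z > z) ≈ R (4 ln 2)(2π)^(-3/2) z exp(-z^2/2), where R is the number of resels (search area divided by FWHM^2). The sketch below uses illustrative field dimensions, not the paper's data, and omits the lower-order EC terms.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

n_pixels, fwhm, alpha = 100 * 100, 10.0, 0.05
resels = n_pixels / fwhm**2

# Bonferroni treats every pixel as an independent test
z_bonf = stats.norm.isf(alpha / n_pixels)

# RFT solves the EC approximation for the family-wise alpha
def ec_excess(z):
    return resels * 4 * np.log(2) * (2 * np.pi)**-1.5 * z * np.exp(-z**2 / 2) - alpha

z_rft = brentq(ec_excess, 1.0, 10.0)
print(f"Bonferroni z* = {z_bonf:.2f}, RFT z* = {z_rft:.2f}")
```

As the text explains, the smoother the field (larger FWHM, hence fewer resels), the further the RFT threshold relaxes below the Bonferroni one.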
Using Combinatorica/Mathematica for Student Projects in Random Graph Theory
ERIC Educational Resources Information Center
Pfaff, Thomas J.; Zaret, Michele
2006-01-01
We give an example of a student project that experimentally explores a topic in random graph theory. We use the "Combinatorica" package in "Mathematica" to estimate the minimum number of edges needed in a random graph to have a 50 percent chance that the graph is connected. We provide the "Mathematica" code and compare it to the known theoretical…
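A Python analogue of that Combinatorica experiment (hypothetical class-project parameters, using the networkx library) sweeps the edge count m of G(n, m) graphs and estimates the connectivity probability; classical random graph theory places the transition near m = (n/2) ln n.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(11)

def p_connected(n, m, trials=200):
    hits = sum(nx.is_connected(nx.gnm_random_graph(n, m, seed=int(rng.integers(1e9))))
               for _ in range(trials))
    return hits / trials

n = 50
for m in range(60, 161, 20):
    print(f"m = {m}: P(connected) ~ {p_connected(n, m):.2f}")
print("theoretical threshold (n/2)*ln(n) =", round(n / 2 * np.log(n), 1))
```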
Xia, Xiaodong; Hao, Jia; Wang, Yang; Zhong, Zheng; Weng, George J
2017-05-24
Highly aligned graphene-based nanocomposites are of great interest due to their excellent electrical properties along the aligned direction. Graphene fillers in these composites are not necessarily perfectly aligned, but their orientations are highly confined within a certain maximum angle, with 90° giving rise to the randomly oriented state and 0° to the perfectly aligned one. Recent experiments have shown that the electrical conductivity and dielectric permittivity of highly aligned graphene-polymer nanocomposites are strongly dependent on this distribution angle, but at present no theory seems to exist to address this issue. In this work we present a new effective-medium theory that is derived from the underlying physical process, including the effects of graphene orientation, filler loading, aspect ratio, percolation threshold, interfacial tunneling, and Maxwell-Wagner-Sillars polarization, to determine these two properties. The theory is formulated in the context of preferred orientational average. We highlight this new theory with an application to rGO/epoxy nanocomposites, and demonstrate that the calculated in-plane and out-of-plane conductivity and permittivity are in agreement with the experimental data as the range of graphene orientations changes from the randomly oriented to the highly aligned state. We also show that the percolation thresholds of highly aligned graphene nanocomposites are in general different along the planar and the normal directions, but they converge into a single one when the statistical distribution of graphene fillers is spherically symmetric.
Dynamic speckle - Interferometry of micro-displacements
NASA Astrophysics Data System (ADS)
Vladimirov, A. P.
2012-06-01
The problem of the dynamics of speckles in the image plane of an object, caused by random movements of scattering centers, is solved. We consider three cases: (1) during the observation the points move at random but constant speeds; (2) the relative displacement of any pair of points is a continuous random process; and (3) the motion of the centers is the sum of a deterministic movement and a random displacement. For cases (1) and (2), the characteristics of the temporal and spectral autocorrelation functions of the radiation intensity can be used to determine the individual and average relative displacements of the centers, their dispersion, and the relaxation time. For case (3), it is shown that under certain conditions the optical signal contains a periodic component whose number of periods is proportional to the deterministic displacement. The results of experiments conducted to test and apply the theory are given.
Emergence of patterns in random processes
NASA Astrophysics Data System (ADS)
Newman, William I.; Turcotte, Donald L.; Malamud, Bruce D.
2012-08-01
Sixty years ago, it was observed that any independent and identically distributed (i.i.d.) random variable would produce a pattern of peak-to-peak sequences with, on average, three events per sequence. This outcome was employed to show that randomness could yield, as a null hypothesis for animal populations, an explanation for their apparent 3-year cycles. We show how we can explicitly obtain a universal distribution of the lengths of peak-to-peak sequences in time series and that this can be employed for long data sets as a test of their i.i.d. character. We illustrate the validity of our analysis utilizing the peak-to-peak statistics of a Gaussian white noise. We also consider the nearest-neighbor cluster statistics of point processes in time. If the time intervals are random, we show that cluster size statistics are identical to the peak-to-peak sequence statistics of time series. In order to study the influence of correlations in a time series, we determine the peak-to-peak sequence statistics for the Langevin equation of kinetic theory leading to Brownian motion. To test our methodology, we consider a variety of applications. Using a global catalog of earthquakes, we obtain the peak-to-peak statistics of earthquake magnitudes and the nearest neighbor interoccurrence time statistics. In both cases, we find good agreement with the i.i.d. theory. We also consider the interval statistics of the Old Faithful geyser in Yellowstone National Park. In this case, we find a significant deviation from the i.i.d. theory which we attribute to antipersistence. We consider the interval statistics using the AL index of geomagnetic substorms. We again find a significant deviation from i.i.d. behavior that we attribute to mild persistence. Finally, we examine the behavior of Standard and Poor's 500 stock index's daily returns from 1928-2011 and show that, while it is close to being i.i.d., there is, again, significant persistence. We expect that there will be many other applications of our methodology both to interoccurrence statistics and to time series.
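The i.i.d. benchmark in that opening observation takes one screen of code to verify (our sketch): in white noise the probability that a sample exceeds both neighbours is 1/3, so peaks arrive every three events on average, and the distribution of peak-to-peak sequence lengths is the universal signature used above as the null hypothesis.

```python
import numpy as np

rng = np.random.default_rng(12)
x = rng.standard_normal(1_000_000)                  # Gaussian white noise

is_peak = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])    # strict local maxima
seq_lengths = np.diff(np.flatnonzero(is_peak))      # events between successive peaks
print("mean events per peak-to-peak sequence:", round(seq_lengths.mean(), 3))

ks, cnt = np.unique(seq_lengths, return_counts=True)
print("P(length):", dict(zip(ks[:5].tolist(), (cnt[:5] / cnt.sum()).round(4).tolist())))
```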
Applications of a general random-walk theory for confined diffusion.
Calvo-Muñoz, Elisa M; Selvan, Myvizhi Esai; Xiong, Ruichang; Ojha, Madhusudan; Keffer, David J; Nicholson, Donald M; Egami, Takeshi
2011-01-01
A general random-walk theory for diffusion in the presence of nanoscale confinement is developed and applied. The random-walk theory contains two parameters describing confinement: a cage size and a cage-to-cage hopping probability. The theory captures the correct nonlinear dependence of the mean square displacement (MSD) on observation time for intermediate times. Because of its simplicity, the theory also has modest computational requirements and is thus able to simulate systems with very low diffusivities for sufficiently long times to reach the infinite-time-limit regime where the Einstein relation can be used to extract the self-diffusivity. The theory is applied to three practical cases in which the degree of order in confinement varies. The three systems include diffusion of (i) polyatomic molecules in metal organic frameworks, (ii) water in proton exchange membranes, and (iii) liquid and glassy iron. For all three cases, the comparison between theory and the results of molecular dynamics (MD) simulations indicates that the theory can describe the observed diffusion behavior with a small fraction of the computational expense. The confined-random-walk theory, fit to the MSDs of very short MD simulations, is capable of accurately reproducing the MSDs of much longer MD simulations. Furthermore, the values of the parameter for cage size correspond to the physical dimensions of the systems, and the cage-to-cage hopping probability corresponds to the activation barrier for diffusion, indicating that the two parameters in the theory are not simply fitted values but correspond to real properties of the physical system.
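A one-dimensional caricature of the two-parameter model reproduces the advertised MSD shape (the cage size L and hopping probability p below are illustrative; the paper's theory and systems are far richer). At short times the MSD sits at the intra-cage plateau of order L^2, and at long times it recovers the Einstein form p L^2 t.

```python
import numpy as np

rng = np.random.default_rng(13)

L, p, n_walkers = 1.0, 0.01, 5000
cage = np.zeros(n_walkers)                      # cage-centre coordinates
x0 = L * (rng.random(n_walkers) - 0.5)          # positions at t = 0
x, t_now = x0.copy(), 0
for t_target in (1, 10, 100, 1000, 10000):
    while t_now < t_target:
        hop = rng.random(n_walkers) < p         # cage-to-cage hop attempt
        cage += np.where(hop, rng.choice([-L, L], n_walkers), 0.0)
        x = cage + L * (rng.random(n_walkers) - 0.5)   # rattle inside the cage
        t_now += 1
    msd = np.mean((x - x0) ** 2)
    print(f"t={t_target}: MSD = {msd:.3f} (free diffusion p*L^2*t = {p * L**2 * t_target:.2f})")
```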
Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory
NASA Astrophysics Data System (ADS)
Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.
2008-11-01
It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line \mathbb{R}. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space \mathbb{R}^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in \mathbb{R}^d. We also demonstrate that spin-polarized fermionic systems in \mathbb{R}^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite-wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has the non-analytic behavior S(k) \sim |k| as k \rightarrow 0. The latter result implies that the pair correlation function g_2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^{d+1}, which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[-κ(d) r^{d+1}] for large r and finite d, where κ(d) is a positive d-dependent constant. We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2^d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.
ERIC Educational Resources Information Center
Roberts-Gray, Cindy; Sweitzer, Sara J.; Ranjit, Nalini; Potratz, Christa; Rood, Magdalena; Romo-Palafox, Maria Jose; Byrd-Williams, Courtney E.; Briley, Margaret E.; Hoelscher, Deanna M.
2017-01-01
Background: A cluster-randomized trial at 30 early care and education centers (Intervention = 15, waitlist Control = 15) showed the "Lunch Is in the Bag" intervention increased parents' packing of fruits, vegetables, and whole grains in their preschool children's bag lunches (parent-child dyads = 351 Intervention, 282 Control). Purpose:…
ERIC Educational Resources Information Center
Young, I. Phillip
2005-01-01
This study addresses the screening decisions for a national random sample of high school principals as viewed from the attraction-similarity theory of interpersonal perceptions. Independent variables are the sex of principals, sex of applicants, and the type of focal positions sought by hypothetical job applicants (teacher or counselor). Dependent…
Queuing Theory and Reference Transactions.
ERIC Educational Resources Information Center
Terbille, Charles
1995-01-01
Examines the implications of applying the queuing theory to three different reference situations: (1) random patron arrivals; (2) random durations of transactions; and (3) use of two librarians. Tables and figures represent results from spreadsheet calculations of queues for each reference situation. (JMV)
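For the two-librarian situation, the corresponding spreadsheet arithmetic is the Erlang-C formula for an M/M/c queue (Poisson arrivals, exponential service). The rates below are hypothetical, chosen only to make the mechanics visible.

```python
from math import factorial

def erlang_c(lam, mu, c):
    """P(wait) and mean queue wait for an M/M/c queue (requires lam < c*mu)."""
    a = lam / mu                                 # offered load in erlangs
    assert a < c, "queue is unstable"
    top = a**c / factorial(c) * c / (c - a)
    p_wait = top / (sum(a**k / factorial(k) for k in range(c)) + top)
    mean_wait = p_wait / (c * mu - lam)          # expected time waiting in queue
    return p_wait, mean_wait

lam, mu = 10.0, 6.0     # 10 patrons/hour; 6 transactions/hour per librarian
for c in (2, 3):
    p, w = erlang_c(lam, mu, c)
    print(f"{c} librarians: P(wait) = {p:.2f}, mean wait = {60 * w:.1f} min")
```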
A two-year follow-up on a program theory of return to work intervention.
Jensen, Anne Grete Claudi
2013-01-01
Validation of a salutogenic theory for return to work (RTW) and an associated program process theory. A longitudinal non-randomized one-year trial study design was used with a two-year follow-up and with comparison to a reference group. Changes in attitudes and active behaviour in the intervention group and at the workplace were supported by cognitive and behavioural approaches. The intervention group included 118 unskilled Danish public employees and privately employed house-cleaners on sick leave due to musculoskeletal and/or common mental illnesses. Significant improvements of the work ability index and perceived health (SF36 subgroups) were reported. A significantly higher RTW and a shorter sick leave than in the reference group also emerged. Positive predictors of RTW were keeping the pre-sick-leave job and improving the work ability index and physical impairment/role physical. A decline in self-efficacy was a negative predictor. Support for the theory and the associated program process theory was found. The intervention seemed to influence RTW and the employees' attitudes, behaviour and health by affecting comprehensibility, meaningfulness and manageability. Sustainable RTW emerged from a synergism of support from the workplace and improved personal resources, especially those concerning mental health. The approach is consistent with integrating health promotion in RTW.
Random Matrix Theory and the Anderson Model
NASA Astrophysics Data System (ADS)
Bellissard, Jean
2004-08-01
This paper is devoted to a discussion of possible strategies to prove rigorously the existence of a metal-insulator Anderson transition for the Anderson model in dimension d≥3. The possible criteria used to define such a transition are presented. It is argued that at low disorder the lowest order in perturbation theory is described by a random matrix model. Various simplified versions for which rigorous results have been obtained in the past are discussed. These include a free probability approach, the Wegner n-orbital model and a class of models proposed by Disertori, Pinson, and Spencer, Comm. Math. Phys. 232:83-124 (2002). Finally, a recent work by Magnen, Rivasseau, and the author, Markov Processes and Related Fields 9:261-278 (2003), is summarized: it gives a toy model describing the lowest order approximation of the Anderson model, and it is proved that, for d=2, its density of states is given by the semicircle distribution. A short discussion of its extension to d≥3 follows.
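The semicircle distribution mentioned at the end is quick to see numerically. The sketch below (a generic GOE-type matrix, standing in for, but not identical to, the toy model's lowest-order approximation) histograms the eigenvalues of a large symmetric random matrix against the density (1/(2π)) sqrt(4 - x^2) on [-2, 2].

```python
import numpy as np

rng = np.random.default_rng(14)

N = 1000
a = rng.standard_normal((N, N))
h = (a + a.T) / np.sqrt(2 * N)                 # symmetric matrix, GOE normalization
eig = np.linalg.eigvalsh(h)

hist, edges = np.histogram(eig, bins=20, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
print("max |empirical - semicircle| =", round(np.abs(hist - semicircle).max(), 3))
```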
Absorption and scattering of light by nonspherical particles. [in atmosphere
NASA Technical Reports Server (NTRS)
Bohren, C. F.
1986-01-01
Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements of forward-scattering light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.
A simple approach to nonlinear estimation of physical systems
Christakos, G.
1988-01-01
Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then accommodate a wide range of applications. Under this assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It compares favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared. © 1988.
Hierarchical random cellular neural networks for system-level brain-like signal processing.
Kozma, Robert; Puljic, Marko
2013-09-01
Sensory information processing and cognition in brains are modeled using dynamic systems theory. The brain's dynamic state is described by a trajectory evolving in a high-dimensional state space. We introduce a hierarchy of random cellular automata as the mathematical tools to describe the spatio-temporal dynamics of the cortex. The corresponding brain model is called neuropercolation which has distinct advantages compared to traditional models using differential equations, especially in describing spatio-temporal discontinuities in the form of phase transitions. Phase transitions demarcate singularities in brain operations at critical conditions, which are viewed as hallmarks of higher cognition and awareness experience. The introduced Monte-Carlo simulations obtained by parallel computing point to the importance of computer implementations using very large-scale integration (VLSI) and analog platforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
Can An Evolutionary Process Create English Text?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.
Critics of the conventional theory of biological evolution have asserted that while natural processes might result in some limited diversity, nothing fundamentally new can arise from 'random' evolution. In response, biologists such as Richard Dawkins have demonstrated that a computer program can generate a specific short phrase via evolution-like iterations starting with random gibberish. While such demonstrations are intriguing, they are flawed in that they have a fixed, pre-specified future target, whereas in real biological evolution there is no fixed future target, but only a complicated 'fitness landscape'. In this study, a significantly more sophisticated evolutionary scheme is employed to produce text segments reminiscent of a Charles Dickens novel. The aggregate size of these segments is larger than the computer program and the input Dickens text, even when comparing compressed data (as a measure of information content).
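For contrast with the open-ended scheme of the study, the fixed-target demonstration attributed to Dawkins fits in a few lines (our sketch; the mutation rate and population size are illustrative).

```python
import random
import string

random.seed(0)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    # keep the parent among the candidates (elitism) so fitness never decreases
    candidates = [mutate(parent) for _ in range(100)] + [parent]
    parent = max(candidates, key=fitness)
    generations += 1
print(f"reached the fixed target in {generations} generations")
```

The study's point stands in the contrast: here the target is pre-specified, whereas its scheme has no fixed future target, only a fitness landscape.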
Cosmic ray sources, acceleration and propagation
NASA Technical Reports Server (NTRS)
Ptuskin, V. S.
1986-01-01
A review is given of selected papers on the theory of cosmic ray (CR) propagation and acceleration. The high isotropy and comparatively large age of galactic CR are explained by the effective interaction of relativistic particles with random and regular electromagnetic fields in the interstellar medium. The kinetic theory of CR propagation in the Galaxy is formulated similarly to the elaborate theory of CR propagation in the heliosphere. The substantial difference between these theories is explained by the need, in some cases, to take into account collective effects due to the rather high density of relativistic particles. In particular, the kinetic CR stream instability and the hydrodynamic Parker instability are studied. The interaction of relativistic particles with an ensemble of given weak random magnetic fields is calculated by perturbation theory. The theory of CR transfer is considered to be basically complete for this case. The main problem is the poor information about the structure of the regular and random galactic magnetic fields. An account is given of CR transfer in a turbulent medium.
Williams, Geoffrey C; McGregor, Holly A; Sharp, Daryl; Levesque, Chantal; Kouides, Ruth W; Ryan, Richard M; Deci, Edward L
2006-01-01
A longitudinal randomized trial tested the self-determination theory (SDT) intervention and process model of health behavior change for tobacco cessation (N = 1006). Adult smokers were recruited for a study of smokers' health and were assigned to intensive treatment or community care. Participants were relatively poor and undereducated. Intervention patients perceived greater autonomy support and reported greater autonomous and competence motivations than did control patients. They also reported greater medication use and significantly greater abstinence. Structural equation modeling analyses confirmed the SDT process model in which perceived autonomy support led to increases in autonomous and competence motivations, which in turn led to greater cessation. The causal role of autonomy support in the internalization of autonomous motivation, perceived competence, and smoking cessation was supported. Copyright 2006 APA, all rights reserved.
In Darwinian evolution, feedback from natural selection leads to biased mutations.
Caporale, Lynn Helena; Doyle, John
2013-12-01
Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context). This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory. © 2013 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Zausner, Tobi
Chaos theory may provide models for creativity and for the personality of the artist. A collection of speculative hypotheses examines the connection between art and such fundamentals of non-linear dynamics as iteration, dissipative processes, open systems, entropy, sensitivity to stimuli, autocatalysis, subsystems, bifurcations, randomness, unpredictability, irreversibility, increasing levels of organization, far-from-equilibrium conditions, strange attractors, period doubling, intermittency and self-similar fractal organization. Non-linear dynamics may also explain why certain individuals suffer mental disorders while others remain intact during a lifetime of sustained creative output.
Spectral statistics of random geometric graphs
NASA Astrophysics Data System (ADS)
Dettmann, C. P.; Georgiou, O.; Knight, G.
2017-04-01
We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and at long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, the level of community structure, and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
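As a minimal illustration of the spacing analysis described above (a hedged sketch: graph size, connection radius, unfolding degree, and binning are illustration values, not the paper's settings), one can unfold the adjacency spectrum of a random geometric graph and compare the nearest-neighbour spacing histogram with the Poisson and GOE reference curves:

```python
import numpy as np
import networkx as nx

# Hedged sketch: nearest-neighbour spacing statistics of an RGG adjacency
# spectrum, compared against Poisson and GOE (Wigner surmise) curves.
n, radius = 500, 0.15
G = nx.random_geometric_graph(n, radius, seed=42)
lam = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))

# Crude unfolding: fit a smooth polynomial to the cumulative level count
# so that the local mean spacing becomes 1 (careful studies use ensemble
# averaging and more robust unfolding).
staircase = np.polynomial.Polynomial.fit(lam, np.arange(1, n + 1), deg=9)
s = np.diff(staircase(lam))
s = s[s > 0] / s[s > 0].mean()

grid = np.linspace(0.0, 4.0, 200)
p_poisson = np.exp(-grid)                                  # uncorrelated levels
p_goe = (np.pi / 2) * grid * np.exp(-np.pi * grid**2 / 4)  # GOE surmise
hist, _ = np.histogram(s, bins=40, range=(0, 4), density=True)
```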
NASA Astrophysics Data System (ADS)
Ahmadia, Gabby N.; Tornabene, Luke; Smith, David J.; Pezold, Frank L.
2018-03-01
Factors shaping coral-reef fish species assemblages can operate over a wide range of spatial scales (local versus regional) and across both proximate and evolutionary time. Niche theory and neutral theory provide frameworks for testing assumptions and generating insights about the importance of local versus regional processes. Niche theory postulates that species assemblages are an outcome of evolutionary processes at regional scales followed by local-scale interactions, whereas neutral theory presumes that species assemblages are formed by largely random processes drawing from regional species pools. Indo-Pacific cryptobenthic coral-reef fishes are highly evolved, ecologically diverse, temporally responsive, and situated on a natural longitudinal diversity gradient, making them an ideal group for testing predictions from niche and neutral theories and effects of regional and local processes on species assemblages. Using a combination of ecological metrics (fish density, diversity, assemblage composition) and evolutionary analyses (testing for phylogenetic niche conservatism), we demonstrate that the structure of cryptobenthic fish assemblages can be explained by a mixture of regional factors, such as the size of regional species pools and broad-scale barriers to gene flow/drivers of speciation, coupled with local-scale factors, such as the relative abundance of specific microhabitat types. Furthermore, species of cryptobenthic fishes have distinct microhabitat associations that drive significant differences in assemblage community structure between microhabitat types, and these distinct microhabitat associations are phylogenetically conserved over evolutionary timescales. The implied differential fitness of cryptobenthic fishes across varied microhabitats and the conserved nature of their ecology are consistent with predictions from niche theory. Neutral theory predictions may still hold true for early life-history stages, where stochastic factors may be more important in explaining recruitment. Overall, through integration of ecological and evolutionary techniques, and using multiple spatial scales, our study offers a unique perspective on factors determining coral-reef fish assemblages.
Social patterns revealed through random matrix theory
NASA Astrophysics Data System (ADS)
Sarkar, Camellia; Jalan, Sarika
2014-11-01
Despite the tremendous advancements in the field of network theory, very few studies have taken into consideration the weights of interactions, which emerge naturally in all real-world systems. Using random matrix analysis of a weighted social network, we demonstrate the profound impact of interaction weights on emerging structural properties. The analysis reveals that randomness existing in a particular time frame affects the decisions of individuals, granting them more freedom of choice in situations of financial security. While the structural organization of the networks remains the same throughout all datasets, random matrix theory provides insight into the interaction patterns of individuals of the society in situations of crisis. It is also contemplated that individual accountability in terms of weighted interactions remains a key to success unless segregation of tasks comes into play.
NASA Astrophysics Data System (ADS)
Yang, X.; Zhu, P.; Gu, Y.; Xu, Z.
2015-12-01
Small-scale heterogeneities of the subsurface medium can be characterized conveniently and effectively using a few simple random medium parameters (RMP), such as the autocorrelation length, angle, and roughness factor. The estimation of these parameters is significant in both oil reservoir prediction and metallic mine exploration. The poor accuracy and low stability of current estimation approaches limit the application of random medium theory in seismic exploration. This study focuses on improving the accuracy and stability of RMP estimation from post-stack seismic data and on its application in seismic inversion. Experiments and theoretical analysis indicate that, although the autocorrelation of a random medium is related to that of the corresponding post-stack seismic data, the relationship is clearly affected by the seismic dominant frequency, the autocorrelation length, the roughness factor, and so on. The error in calculating the autocorrelation for a finite, discrete model also decreases the accuracy. In order to improve the precision of RMP estimation, we design two improved approaches. First, we apply a region-growing algorithm, often used in image processing, to reduce the influence of noise in the autocorrelation calculated by the power spectrum method. Second, the orientation of the autocorrelation is used as a new constraint in the estimation algorithm. Numerical experiments prove that this is feasible. In addition, in post-stack seismic inversion of a random medium, the estimated RMP may be used to constrain the inversion procedure and to construct the initial model. The experimental results indicate that treating the inverted model as a random medium and using relatively accurate RMP estimates to construct the initial model yields better inversion results, containing more details that conform to the actual subsurface medium.
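The power-spectrum route to the autocorrelation mentioned above follows the Wiener-Khinchin theorem. A hedged sketch (synthetic stand-in data; the 1/e criterion and all parameters are illustrative, not the authors' exact procedure):

```python
import numpy as np

# Hedged sketch: 2-D autocorrelation of a post-stack section via its power
# spectrum, and a crude 1/e estimate of the autocorrelation lengths.
def autocorr2d(section):
    """Normalized 2-D autocorrelation computed through the FFT."""
    spec = np.abs(np.fft.fft2(section - section.mean())) ** 2
    acf = np.fft.fftshift(np.fft.ifft2(spec).real)
    return acf / acf.max()

def corr_length(profile, d=1.0):
    """Lag (samples * d) at which a centered 1-D ACF profile drops to 1/e."""
    half = profile[len(profile) // 2:]          # from the zero-lag peak outward
    below = np.where(half < 1.0 / np.e)[0]
    return below[0] * d if below.size else np.nan

section = np.random.randn(256, 256)             # replace with real data
acf = autocorr2d(section)
a_x = corr_length(acf[acf.shape[0] // 2, :])    # lateral correlation length
a_z = corr_length(acf[:, acf.shape[1] // 2])    # vertical correlation length
```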
Universality in chaos: Lyapunov spectrum and random matrix theory.
Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki
2018-02-01
We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
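A hedged sketch of the last point, the product of random matrices (matrix size, number of factors, and the Ginibre ensemble are assumed illustration choices): finite-time Lyapunov exponents can be extracted by repeated QR re-orthonormalization, after which their spacings can be compared with random-matrix statistics.

```python
import numpy as np

# Hedged sketch: finite-time Lyapunov exponents of a product of Gaussian
# random matrices via QR re-orthonormalization.
rng = np.random.default_rng(0)
N, T = 20, 2000
Q = np.eye(N)
log_r = np.zeros(N)
for _ in range(T):
    M = rng.standard_normal((N, N)) / np.sqrt(N)
    Q, R = np.linalg.qr(M @ Q)
    log_r += np.log(np.abs(np.diag(R)))   # accumulated stretching rates
lyap = np.sort(log_r / T)
s = np.diff(lyap)
s /= s.mean()        # normalized spacings, to be compared with RMT surmises
```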
A generalization of random matrix theory and its application to statistical physics.
Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H
2017-02-01
To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples: inflation rates and air pressure data for 95 US cities.
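One plausible reading of the idea, sketched below under assumptions (this is not the paper's exact procedure): whiten each series with a fitted AR(1) filter before comparing correlation-matrix eigenvalues with the Marchenko-Pastur band, which is only valid for auto-uncorrelated data.

```python
import numpy as np

# Hedged sketch in the spirit of ARRMT: remove lag-1 autocorrelation, then
# look for eigenvalues outside the Marchenko-Pastur support.
rng = np.random.default_rng(7)
N, T = 50, 1000
X = rng.standard_normal((N, T))          # stand-in; replace with real series
for i in range(N):
    phi = np.corrcoef(X[i, 1:], X[i, :-1])[0, 1]   # lag-1 autocorrelation
    X[i, 1:] = X[i, 1:] - phi * X[i, :-1]          # AR(1) residuals
X = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
eig = np.linalg.eigvalsh(X @ X.T / T)
q = N / T
mp_lo, mp_hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2   # MP support
outliers = eig[(eig < mp_lo) | (eig > mp_hi)]   # candidate true correlations
```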
Response of moderately thick laminated cross-ply composite shells subjected to random excitation
NASA Technical Reports Server (NTRS)
Elishakoff, Isaak; Cederbaum, Gabriel; Librescu, Liviu
1989-01-01
This study deals with the dynamic response of transverse shear deformable laminated shells subjected to random excitation. The analysis encompasses the following problems: (1) the dynamic response of circular cylindrical shells of finite length excited by an axisymmetric uniform ring loading, stationary in time, and (2) the response of spherical and cylindrical panels subjected to stationary random loadings with uniform spatial distribution. The associated equations governing the structural theory of shells are derived upon discarding the classical Love-Kirchhoff (L-K) assumptions. In this sense, the theory is formulated in the framework of the first-order transverse shear deformation theory (FSDT).
ERIC Educational Resources Information Center
Wilde, Carroll O.
The Poisson probability distribution is seen to provide a mathematical model from which useful information can be obtained in practical applications. The distribution and some situations to which it applies are studied, and ways to find answers to practical questions are noted. The unit includes exercises and a model exam, and provides answers to…
Ultrasensitivity and sharp threshold theorems for multisite systems
NASA Astrophysics Data System (ADS)
Dougoud, M.; Mazza, C.; Vinckenbosch, L.
2017-02-01
This work studies the ultrasensitivity of multisite binding processes where ligand molecules can bind to several binding sites. It considers, more particularly, recent models involving complex chemical reactions in allosteric phosphorylation processes and for transcription factors and nucleosomes competing for binding on DNA. New statistics-based formulas for the Hill coefficient and the effective Hill coefficient are provided, and necessary conditions for a system to be ultrasensitive are exhibited. It is first shown that the ultrasensitivity of binding processes can be approached using sharp-threshold theorems, which have been developed in applied probability theory and statistical mechanics for studying sharp threshold phenomena in reliability theory, random graph theory, and percolation theory. Special classes of binding processes are then introduced and are described as density-dependent birth-and-death processes. New precise large deviation results for the steady-state distribution of the process are obtained, which make it possible to show that switch-like ultrasensitive responses are strongly related to the multi-modality of the steady-state distribution. Ultrasensitivity occurs if and only if the entropy of the dynamical system has more than one global minimum for some critical ligand concentration. In this case, the Hill coefficient is proportional to the number of binding sites, and the system is highly ultrasensitive. The classical effective Hill coefficient I is extended to a new cooperativity index I_q, for which we recommend the computation of a broad range of values of q instead of just the standard one, I = I_0.9, corresponding to the 10%-90% variation in the dose-response. It is shown that this single choice can sometimes mislead the conclusion by not detecting ultrasensitivity. This new approach allows a better understanding of multisite ultrasensitive systems and provides new tools for the design of such systems.
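A hedged sketch of the cooperativity index (the paper's exact normalization convention may differ; here I_q is computed from EC_q and EC_{1-q}, is symmetric in q and 1-q, and I_0.1 reproduces the classical 10%-90% effective Hill coefficient log(81)/log(EC90/EC10)):

```python
import numpy as np

# Hedged sketch: cooperativity index I_q from a normalized dose-response
# curve; returns n exactly for an ideal Hill function.
def ec(level, x, y):
    """Ligand concentration at which the normalized response reaches `level`."""
    return np.interp(level, y, x)          # y must be increasing

def cooperativity_index(x, y, q=0.1):
    """I_q = log(((1-q)/q)^2) / log(EC_{1-q} / EC_q)."""
    return np.log(((1 - q) / q) ** 2) / np.log(ec(1 - q, x, y) / ec(q, x, y))

x = np.logspace(-3, 3, 1000)
n = 4.0
y = x**n / (1 + x**n)                      # ideal Hill curve on [0, 1]
print(cooperativity_index(x, y, q=0.1))    # ~4, the underlying Hill exponent
```

Scanning a broad range of q, as the abstract recommends, amounts to calling cooperativity_index for several q values and checking whether any of them reveals a steep (ultrasensitive) region that I_0.1 alone misses.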
A theory of Jovian decameter radiation
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Sharma, R. R.; Papadopoulos, K.; Ben-Ari, M.; Eviatar, A.
1983-01-01
A theory of the Jovian decameter radiation is presented based on the assumed existence of beams of energetic electrons in the inner Jovian magnetosphere. Beam-like electron distributions are shown to be unstable to the growth of both upper hybrid and lower hybrid electrostatic waves. The upconversion of these waves to fast extraordinary-mode electromagnetic radiation is calculated using a fluid model. Two possibilities are considered. First, a random phase approximation is made, which leads to a very conservative estimate of the intensity that can be expected in decameter radiation. The alternative possibility is also considered, viz., that the upconversion process is coherent. A comparison of both processes suggests that an incoherent interaction may be adequate to account for the observed intensity of decametric radiation, except perhaps near the peak of the spectrum (8 MHz). The coherent process is intrinsically more efficient and can easily produce the observed intensity near 8 MHz if only 0.01% of the energy in the beam is converted to electrostatic energy.
Neutral Community Dynamics and the Evolution of Species Interactions.
Coelho, Marco Túlio P; Rangel, Thiago F
2018-04-01
A contemporary goal in ecology is to determine the ecological and evolutionary processes that generate recurring structural patterns in mutualistic networks. One of the great challenges is testing the capacity of neutral processes to replicate observed patterns in ecological networks, since the original formulation of the neutral theory lacks trophic interactions. Here, we develop a stochastic-simulation neutral model adding trophic interactions to the neutral theory of biodiversity. Without invoking ecological differences among individuals of different species, and assuming that ecological interactions emerge randomly, we demonstrate that a spatially explicit multitrophic neutral model is able to capture the recurrent structural patterns of mutualistic networks (i.e., degree distribution, connectance, nestedness, and phylogenetic signal of species interactions). Nonrandom species distribution, caused by probabilistic events of migration and speciation, create nonrandom network patterns. These findings have broad implications for the interpretation of niche-based processes as drivers of ecological networks, as well as for the integration of network structures with demographic stochasticity.
Sulis, William H
2017-10-01
Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear, correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.
Aging Theories for Establishing Safe Life Spans of Airborne Critical Structural Components
NASA Technical Reports Server (NTRS)
Ko, William L.
2003-01-01
New aging theories have been developed to establish the safe life span of airborne critical structural components such as B-52B aircraft pylon hooks for carrying air-launch drop-test vehicles. The new aging theories use the equivalent-constant-amplitude loading spectrum to represent the actual random loading spectrum with the same damaging effect. The crack growth due to random loading cycling of the first flight is calculated using the half-cycle theory, and then extrapolated to all the crack growths of the subsequent flights. The predictions of the new aging theories (finite difference aging theory and closed-form aging theory) are compared with the classical flight-test life theory and the previously developed Ko first- and Ko second-order aging theories. The new aging theories predict the number of safe flights as considerably lower than that predicted by the classical aging theory, and slightly lower than those predicted by the Ko first- and Ko second-order aging theories due to the inclusion of all the higher order terms.
The diversity and unity of reactor noise theory
NASA Astrophysics Data System (ADS)
Kuang, Zhifeng
The study of reactor noise theory concerns questions about cause-and-effect relationships and the utilisation of random noise in nuclear reactor systems. The diversity of reactor noise theory arises from the variety of noise sources, the various mathematical treatments applied, and the various practical purposes. The neutron noise in zero-energy systems arises from the fluctuations in the number of neutrons per fission, the time between nuclear events, and the type of reactions. It can be used to evaluate system parameters. The mathematical treatment is based on the master equation of stochastic branching processes. The noise in power reactor systems arises from random processes of technological origin such as vibration of mechanical parts, boiling of the coolant, and fluctuations of temperature and pressure. It can be used to monitor reactor behaviour with the possibility of detecting malfunctions at an early stage. The mathematical treatment is based on the Langevin equation. The unity of reactor noise theory arises from the fact that the useful information in noise is embedded in the second moments of random variables, which opens the possibility of building up a unified mathematical description and analysis of the various reactor noise sources. Exploring such possibilities is the main subject among the three major topics reported in this thesis. The first subject lies within zero-power noise in steady media, where we report on the extension of the existing theory to more general cases. In Paper I, by use of the master equation approach, we derived the most general Feynman- and Rossi-alpha formulae so far, by taking into account the full joint statistics of the prompt neutrons and all six groups of delayed neutron precursors, as well as a multiple emission source. The problems involved are solved with a combination of effective analytical techniques and symbolic algebra codes (Mathematica). Paper II gives a numerical evaluation of these formulae. An assessment is made of the contribution of the terms that are novel compared to the traditional formulae. The second subject treats a problem in power reactor noise with the Langevin formalism. With very few exceptions, all previous work used the diffusion approximation. In order to extend the treatment to transport theory, in Paper III we introduced a novel method, i.e., Padé approximation via the Lanczos algorithm, to calculate the transfer function of a finite slab reactor described by the one-group transport equation. It was found that the local-global decomposition of the neutron noise, formerly only reproduced in at least two-group theory, can be reconstructed. We also showed the existence of a boundary layer of the neutron noise close to the boundary. Finally, we explored the possibility of building up a unified theory to account for the coexistence of zero-power and power reactor noise in a system. In Paper IV, a unified description of the neutron noise is given by the use of backward master equations in a model where the cross-section fluctuations are given as a simple binary pseudorandom process. The general solution contains both the zero-power and the power reactor noise concurrently, and they can be extracted individually as limiting cases of the general solution. This justifies the separate treatments of zero-power and power reactor noise. The result was extended to the case including one group of delayed neutron precursors in Paper V.
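For orientation, a hedged sketch of the classical one-group, prompt-neutron Feynman-alpha (variance-to-mean) formula, to which generalized results of the Paper I type reduce when delayed neutrons and multiple emission are dropped:

```latex
% Hedged sketch: prompt-neutron, one-group variance-to-mean (Feynman-Y).
\[
  Y(T) \;=\; \frac{\operatorname{Var} Z(T)}{\langle Z(T)\rangle} \;-\; 1
       \;=\; Y_{\infty}\!\left(1 \;-\; \frac{1 - e^{-\alpha T}}{\alpha T}\right),
\]
% where Z(T) is the detector count in a gate of length T, alpha is the
% prompt-neutron decay constant, and Y_infty collects the detector
% efficiency and the Diven factor of the fission neutron multiplicity.
```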
Application of rrm as behavior mode choice on modelling transportation
NASA Astrophysics Data System (ADS)
Surbakti, M. S.; Sadullah, A. F.
2018-03-01
Transportation mode selection, the first step in the transportation planning process, is probably one of the most important planning elements. The development of models that can explain passengers' preferences regarding their chosen mode of public transport will contribute to the improvement and development of existing public transport. Logit models have been widely used to determine mode choice models in which the alternatives are different transport modes. Random Regret Minimization (RRM) theory is a theory developed from choice behaviour under uncertainty. During its development, the theory has been used in various disciplines, such as marketing, microeconomics, psychology, management, and transportation. This article aims to show the use of RRM in various mode-choice settings, drawing on the results of studies conducted both in North Sumatera and western Java.
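A hedged sketch of RRM choice probabilities in the style of Chorus's regret function (the attribute matrix, taste parameters, and sign conventions below are illustration values, not this study's estimates):

```python
import numpy as np

# Hedged sketch: random regret minimization (RRM) choice probabilities.
def rrm_probabilities(X, beta):
    """X: (alternatives, attributes); beta: attribute weights.
    Regret of alternative i accumulates ln(1 + exp(beta_m (x_jm - x_im)))
    over every rival j and attribute m; choice follows a logit on -regret."""
    n = X.shape[0]
    regret = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                regret[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    expneg = np.exp(-regret)
    return expneg / expneg.sum()

# three modes described by (cost, travel time); negative betas encode that
# lower cost and lower time are better, so regret accrues when a rival wins
X = np.array([[2.0, 40.0], [5.0, 25.0], [8.0, 15.0]])
beta = np.array([-0.3, -0.05])
print(rrm_probabilities(X, beta))
```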
Failure and recovery in dynamical networks.
Böttcher, L; Luković, M; Nagler, J; Havlin, S; Herrmann, H J
2017-02-03
Failure, damage spread and recovery crucially underlie many spatially embedded networked systems ranging from transportation structures to the human body. Here we study the interplay between spontaneous damage, induced failure and recovery in both embedded and non-embedded networks. In our model the network's components follow three realistic processes that capture these features: (i) spontaneous failure of a component independent of the neighborhood (internal failure), (ii) failure induced by failed neighboring nodes (external failure) and (iii) spontaneous recovery of a component. We identify a metastable domain in the global network phase diagram spanned by the model's control parameters where dramatic hysteresis effects and random switching between two coexisting states are observed. This dynamics depends on the characteristic link length of the embedded system. For the Euclidean lattice in particular, hysteresis and switching only occur in an extremely narrow region of the parameter space compared to random networks. We develop a unifying theory which links the dynamics of our model to contact processes. Our unifying framework may help to better understand controllability in spatially embedded and random networks where spontaneous recovery of components can mitigate spontaneous failure and damage spread in dynamical networks.
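A hedged sketch of the three processes on a random graph (the rates, the neighbour threshold, and the asynchronous update scheme are illustrative assumptions; the paper's exact rules, e.g. recovery after a fixed repair time, may differ):

```python
import numpy as np
import networkx as nx

# Hedged sketch: internal failure (prob p), external failure when at least
# m neighbours are failed (prob r), and spontaneous recovery (prob q).
rng = np.random.default_rng(1)
n_nodes = 200
G = nx.erdos_renyi_graph(n_nodes, 0.05, seed=1)
failed = np.zeros(n_nodes, dtype=bool)
p, r, q, m = 0.001, 0.2, 0.1, 4

for _ in range(5000):
    i = rng.integers(n_nodes)
    if failed[i]:
        if rng.random() < q:                       # spontaneous recovery
            failed[i] = False
    else:
        bad = sum(failed[j] for j in G[i])         # failed neighbours
        if rng.random() < (r if bad >= m else p):  # external vs internal
            failed[i] = True
print(failed.mean())                               # fraction of failed nodes
```

Sweeping p and r in such a simulation is how one would map out the metastable, hysteretic region the abstract describes.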
CMV matrices in random matrix theory and integrable systems: a survey
NASA Astrophysics Data System (ADS)
Nenciu, Irina
2006-07-01
We present a survey of recent results concerning a remarkable class of unitary matrices, the CMV matrices. We are particularly interested in the role they play in the theory of random matrices and integrable systems. Throughout the paper we also emphasize the analogies and connections to Jacobi matrices.
Random walk in generalized quantum theory
NASA Astrophysics Data System (ADS)
Martin, Xavier; O'Connor, Denjoe; Sorkin, Rafael D.
2005-01-01
One can view quantum mechanics as a generalization of classical probability theory that provides for pairwise interference among alternatives. Adopting this perspective, we “quantize” the classical random walk by finding, subject to a certain condition of “strong positivity”, the most general Markovian, translationally invariant “decoherence functional” with nearest neighbor transitions.
Stochastic climate dynamics: Stochastic parametrizations and their global effects
NASA Astrophysics Data System (ADS)
Ghil, Michael
2010-05-01
A well-known difficulty in modeling the atmosphere's and oceans' general circulation is the limited, albeit increasing, resolution possible in the numerical solution of the governing partial differential equations. While the mass, energy, and momentum of an individual cloud in the atmosphere, or of a convection chimney in the oceans, are negligible, their combined effects over long times are not. Until recently, small, subgrid-scale processes were represented in general circulation models (GCMs) by deterministic "parametrizations." While A. Arakawa and associates realized over three decades ago the conceptual need for ensembles of clouds in such parametrizations, it is only very recently that truly stochastic parametrizations have been introduced into GCMs and weather prediction models. These parametrizations essentially transform a deterministic autonomous system into a non-autonomous one, subject to random forcing. To study systematically the long-term effects of such forcing, one has to rely on the theory of random dynamical systems (RDS). This theory allows one to consider the detailed geometric structure of the random attractors associated with nonlinear, stochastically perturbed systems. These attractors extend the concept of strange attractors from autonomous dynamical systems to non-autonomous systems with random forcing. To illustrate the essence of the theory, its concepts and methods, we carry out a high-resolution numerical study of two "toy" models in their respective phase spaces. This study allows one to obtain a good approximation of their global random attractors, as well as of the time-dependent invariant measures supported by these attractors. The first of the two models studied herein is the Arnol'd family of circle maps in the presence of noise. The maps' fine-grained, resonant landscape, associated with Arnol'd tongues, is smoothed by the noise, thus permitting a comparison with the observable aspects of the "Devil's staircase" that arises in modeling the El Nino-Southern Oscillation (ENSO). These results are confirmed by studying a "French garden" that is obtained by smoothing a "Devil's quarry." Such a quarry results from coupling two circle maps, and random forcing leads to a smoothed version thereof. We thus suspect that stochastic parametrizations will stabilize the sensitive dependence on parameters that has been noticed in the development of GCMs. This talk represents joint work with Mickael D. Chekroun, D. Kondrashov, Eric Simonnet and I. Zaliapin. Several other talks and posters complement the results presented here and provide further insights into RDS theory and its application to the geosciences.
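The first toy model admits a very compact sketch. Below, a hedged illustration of the Arnol'd circle map with additive noise (Omega, K, and sigma are illustration values, not the talk's settings):

```python
import numpy as np

# Hedged sketch: noisy Arnol'd circle map
#   x_{n+1} = x_n + Omega - (K / 2*pi) * sin(2*pi*x_n) + sigma*xi_n  (mod 1)
rng = np.random.default_rng(2)
Omega, K, sigma = 0.3, 0.9, 0.01
x = 0.1
traj = []
for _ in range(10_000):
    x = (x + Omega - (K / (2 * np.pi)) * np.sin(2 * np.pi * x)
         + sigma * rng.standard_normal()) % 1.0
    traj.append(x)
# Sweeping Omega and recording the mean rotation per step traces out the
# noise-smoothed Devil's staircase mentioned in the abstract.
```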
Strauman, Timothy J; Eddington, Kari M
2017-02-01
Self-regulation models of psychopathology provide a theory-based, empirically supported framework for developing psychotherapeutic interventions that complement and extend current cognitive-behavioral models. However, many clinicians are only minimally familiar with the psychology of self-regulation. The aim of the present manuscript is twofold. First, we provide an overview of self-regulation as a motivational process essential to well-being and introduce two related theories of self-regulation which have been applied to depression. Second, we describe how self-regulatory concepts and processes from those two theories have been translated into psychosocial interventions, focusing specifically on self-system therapy (SST), a brief structured treatment for depression that targets personal goal pursuit. Two randomized controlled trials have shown that SST is superior to cognitive therapy for depressed clients with specific self-regulatory deficits, and both studies found evidence that SST works in part by restoring adaptive self-regulation. Self-regulation-based psychotherapeutic approaches to depression hold significant promise for enhancing treatment efficacy and ultimately may provide an individualizable framework for treatment planning.
The supersymmetric method in random matrix theory and applications to QCD
NASA Astrophysics Data System (ADS)
Verbaarschot, Jacobus
2004-12-01
The supersymmetric method is a powerful method for the nonperturbative evaluation of quenched averages in disordered systems. Among others, this method has been applied to the statistical theory of S-matrix fluctuations, the theory of universal conductance fluctuations and the microscopic spectral density of the QCD Dirac operator. We start this series of lectures with a general review of Random Matrix Theory and the statistical theory of spectra. An elementary introduction of the supersymmetric method in Random Matrix Theory is given in the second and third lecture. We will show that a Random Matrix Theory can be rewritten as an integral over a supermanifold. This integral will be worked out in detail for the Gaussian Unitary Ensemble that describes level correlations in systems with broken time-reversal invariance. We especially emphasize the role of symmetries. As a second example of the application of the supersymmetric method we discuss the calculation of the microscopic spectral density of the QCD Dirac operator. This is the eigenvalue density near zero on the scale of the average level spacing which is known to be given by chiral Random Matrix Theory. Also in this case we use symmetry considerations to rewrite the generating function for the resolvent as an integral over a supermanifold. The main topic of the second-to-last lecture is recent developments on the relation between the supersymmetric partition function and integrable hierarchies (in our case the Toda lattice hierarchy). We will show that this relation is an efficient way to calculate superintegrals. Several examples that were given in previous lectures will be worked out by means of this new method. Finally, we will discuss the quenched QCD Dirac spectrum at nonzero chemical potential. Because of the nonhermiticity of the Dirac operator the usual supersymmetric method has not been successful in this case. However, we will show that the supersymmetric partition function can be evaluated by means of the replica limit of the Toda lattice equation.
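The Gaussian Unitary Ensemble level correlations mentioned here are commonly summarized by the Wigner surmise; a standard form, quoted from general random-matrix knowledge rather than from these lectures:

```latex
% Wigner surmise for the GUE (broken time-reversal invariance):
\[
  p_{\mathrm{GUE}}(s) \;=\; \frac{32}{\pi^{2}}\, s^{2}\, e^{-4 s^{2}/\pi},
\]
% with s the nearest-neighbour level spacing in units of the mean spacing.
```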
Monotonic entropy growth for a nonlinear model of random exchanges.
Apenko, S M
2013-02-01
We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
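A hedged simulation sketch of the dynamics described above (population size, binning, and the disjoint-pairing scheme are illustrative assumptions): random pairs pool their "energy" and split it by a uniform fraction, and a histogram-based entropy estimate grows from step to step.

```python
import numpy as np

# Hedged sketch: Ulam-type random redistribution of energy; the entropy of
# the single-particle energy distribution grows monotonically on average.
rng = np.random.default_rng(3)
e = np.ones(100_000)
for step in range(6):
    perm = rng.permutation(e.size)                 # disjoint random pairing
    i, j = perm[: e.size // 2], perm[e.size // 2:]
    eps = rng.random(e.size // 2)                  # uniform split fraction
    total = e[i] + e[j]
    e[i], e[j] = eps * total, (1.0 - eps) * total
    prob, edges = np.histogram(e, bins=100, density=True)
    prob = prob * np.diff(edges)                   # bin probabilities
    print(step, -(prob[prob > 0] * np.log(prob[prob > 0])).sum())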
Probabilistic structural mechanics research for parallel processing computers
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.
1991-01-01
Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and on the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered by their computationally intensive nature. Solution of PSM problems requires repeated analyses of structures that are often large and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make the solution of large-scale PSM problems practical.
CR-Calculus and adaptive array theory applied to MIMO random vibration control tests
NASA Astrophysics Data System (ADS)
Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.
2016-09-01
Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the drive signal vector that gives an acceptable replication of the target. For MIMO Sine Control tests this target is a (complex) vector with magnitude and phase information at the control points, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so-called CR-Calculus and adaptive array theory. With this approach it is possible to better control the process performance, allowing a step-by-step update of the Jacobian matrix. The theoretical bases of the work are followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.
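For context, a hedged sketch of the classical (non-adaptive) drive computation that such gradient-based schemes iterate around: given a measured frequency response function (FRF) matrix and a target response spectral density matrix, the drive spectral density follows through the pseudo-inverse system model. The shapes and the ideal-plant assumption below are illustrative.

```python
import numpy as np

# Hedged sketch: open-loop drive CSD for MIMO random control.
def drive_csd(H, S_ref):
    """H: (freqs, outputs, inputs) FRF matrix; S_ref: (freqs, outputs,
    outputs) target CSD. Returns S_dd so that H S_dd H^H = S_ref when the
    plant model is exact (the adaptive schemes correct the residual error)."""
    n_in = H.shape[2]
    S_dd = np.empty((H.shape[0], n_in, n_in), dtype=complex)
    for k in range(H.shape[0]):
        Hp = np.linalg.pinv(H[k])                 # per-frequency pseudo-inverse
        S_dd[k] = Hp @ S_ref[k] @ Hp.conj().T
    return S_dd
```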
Anticipatory processing in social anxiety: Investigation using attentional control theory.
Sluis, Rachel A; Boschen, Mark J; Neumann, David L; Murphy, Karen
2017-12-01
Cognitive models of social anxiety disorder (SAD) emphasize anticipatory processing as a prominent maintaining factor occurring before social-evaluative events. While anticipatory processing is a maladaptive process, the cognitive mechanisms that underlie ineffective control of attention are still unclear. The present study tested predictions derived from attentional control theory in a sample of undergraduate students high and low on social anxiety symptoms. Participants were randomly assigned either to engage in anticipatory processing prior to the threat of a speech task or to a control condition with no social-evaluative threat. After completing a series of questionnaires, participants performed pro-saccades and antisaccades in response to peripherally presented facial expressions shown in either single-task or mixed-task blocks. Correct antisaccade latencies were longer than correct pro-saccade latencies, in line with attentional control theory. High socially anxious individuals who anticipated did not exhibit impairment on the inhibition and shifting functions compared to high socially anxious individuals who did not anticipate or low socially anxious individuals in either the anticipatory or control condition. Low socially anxious individuals who anticipated exhibited shorter antisaccade latencies and a switch benefit compared to low socially anxious individuals in the control condition. The study used an analogue sample; however, findings from analogue samples are generally consistent with those from clinical samples. The findings suggest that social-threat-induced anticipatory processing facilitates executive functioning for low socially anxious individuals when anticipating a social-evaluative situation. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Correlations and analytical approaches to co-evolving voter models
NASA Astrophysics Data System (ADS)
Ji, M.; Xu, C.; Choi, C. W.; Hui, P. M.
2013-11-01
The difficulty in formulating analytical treatments of co-evolving networks is studied in light of the Vazquez-Eguíluz-San Miguel voter model (VM) and a modified VM (MVM) that introduces a random mutation of the opinion as a noise in the VM. The density of active links, which are links that connect nodes of opposite opinions, is shown to be highly sensitive to both the degree k of a node and the number of active links n among the neighbors of a node. We test the validity of the formalism of analytical approaches and show explicitly that the assumptions behind the commonly used homogeneous pair approximation scheme in formulating a mean-field theory are the source of the theory's failure, due to the strong correlations between k, n, and n^2. An improved approach that incorporates spatial correlation to the nearest neighbors explicitly, and a random approximation for the next-nearest neighbors, is formulated for the VM and the MVM; it gives better agreement with the simulation results. We introduce an empirical approach that quantifies the correlations more accurately and gives results in good agreement with the simulation results. The work clarifies why simple mean-field theory fails and sheds light on how to analyze the correlations in the dynamic equations that are often generated in co-evolving processes.
From inanimate matter to living systems
NASA Technical Reports Server (NTRS)
Fox, S. W.
1980-01-01
Since the early part of this century, the Genesis account of the origin and evolution of life has been explained as an extrapolation of astronomical and geochemical processes. The essence of the answer to date is a protoreproductive protocell of much biochemical and cytophysical competence. The processes of its origin, molecular ordering, and its functions are described. A crucial understanding is that of the nonrandomness of evolutionary processes at all stages (with perhaps a minor statistical component). In this way, evolution conflicts with statistical randomness; the latter is a favorite assumption of both scientific and creationistic critics of the proteinoid theory. The principal contribution of the proteinoid theory to the understanding of general biology is to particularize the view that evolutionary direction is rooted in the shapes of molecules, in stereochemistry. After molecules of the right kind first assembled into protocells, life in its various stages of evolution was an inevitable consequence. It is molecules that continue to assemble as part of the living process and, in the role of enzymes, continue to direct the life cycle of the cell.
Gerbasi, David; Shapiro, Moshe; Brumer, Paul
2006-02-21
Enantiomeric control of 1,3-dimethylallene in a collisional environment is examined. Specifically, our previous "laser distillation" scenario wherein three perpendicular linearly polarized light fields are applied to excite a set of vib-rotational eigenstates of a randomly oriented sample is considered. The addition of internal conversion, dissociation, decoherence, and collisional relaxation mimics experimental conditions and molecular decay processes. Of greatest relevance is internal conversion which, in the case of dimethylallene, is followed by molecular dissociation. For various rates of internal conversion, enantiomeric control is maintained in this scenario by a delicate balance between collisional relaxation of excited dimethylallene, which enhances control, and collisional dephasing, which diminishes control.
Study on a novel laser target detection system based on software radio technique
NASA Astrophysics Data System (ADS)
Song, Song; Deng, Jia-hao; Wang, Xue-tian; Gao, Zhen; Sun, Ji; Sun, Zhi-hui
2008-12-01
This paper presents the application of software radio techniques to a laser target detection system with pseudo-random code modulation. Based on the theory of software radio, the basic framework of the system, the hardware platform, and the implementation of the software system are detailed. The block diagram of the system, the DSP circuit, the block diagram of the pseudo-random code generator, and the flow diagram of the signal processing software are also designed. Experimental results show that the application of software radio techniques provides a novel way to realize a modular, miniaturized, and intelligent laser target detection system, making upgrades and improvements of the system simpler, more convenient, and cheaper.
Estimating the number of motor units using random sums with independently thinned terms.
Müller, Samuel; Conforto, Adriana Bastos; Z'graggen, Werner J; Kaelin-Lang, Alain
2006-07-01
The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper a new moment-type estimator for the number of motor units in a muscle is defined, which is derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown and its practical value is demonstrated with bootstrap and approximative confidence intervals for a data set from a 31-year-old healthy, right-handed female volunteer. Moreover, simulation results are presented, and Monte-Carlo based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.
Critical behavior of the contact process on small-world networks
NASA Astrophysics Data System (ADS)
Ferreira, Ronan S.; Ferreira, Silvio C.
2013-11-01
We investigate the role of clustering in the critical behavior of the contact process (CP) on small-world networks, using the Watts-Strogatz (WS) network model with an edge rewiring probability p. The critical point is well predicted by a homogeneous cluster approximation in the limit of vanishing clustering (p → 1). The critical exponents and dimensionless moment ratios of the CP are in agreement with those predicted by mean-field theory for any p > 0. This independence from the network clustering shows that the small-world property is a sufficient condition for the mean-field theory to correctly predict the universality of the model. Moreover, we compare the CP dynamics on WS networks with rewiring probability p = 1 and on random regular networks, and show that the weak heterogeneity of the WS network slightly changes the critical point but does not alter the other critical quantities of the model.
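A hedged sketch of the contact process dynamics on a WS graph (network size, rates, initial condition, and the crude event clock are illustration choices): infected nodes heal at rate 1 and attempt to infect a uniformly chosen neighbour at rate lambda.

```python
import numpy as np
import networkx as nx

# Hedged sketch: contact process on a Watts-Strogatz small-world network.
rng = np.random.default_rng(5)
n = 1000
G = nx.watts_strogatz_graph(n, 4, p=0.5, seed=5)
lam = 2.0
infected = set(int(v) for v in rng.choice(n, size=100, replace=False))
t = 0.0
while infected and t < 50.0:
    t += 1.0 / ((1.0 + lam) * len(infected))       # mean inter-event time
    node = list(infected)[rng.integers(len(infected))]
    if rng.random() < 1.0 / (1.0 + lam):
        infected.discard(node)                     # healing event (rate 1)
    else:
        nbrs = list(G[node])                       # infection attempt (rate lam)
        infected.add(nbrs[rng.integers(len(nbrs))])
print(t, len(infected) / n)                        # time and final density
```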
Effect of wave localization on plasma instabilities
NASA Astrophysics Data System (ADS)
Levedahl, William Kirk
1987-10-01
The Anderson model of wave localization in random media is invoked to study the effect of solar wind density turbulence on plasma processes associated with the solar type III radio burst. ISEE-3 satellite data indicate that a possible model for the type III process is the parametric decay of Langmuir waves, excited by solar flare electron streams, into daughter electromagnetic and ion acoustic waves. The threshold for this instability, however, is much higher than observed Langmuir wave levels because of rapid wave convection of the transverse electromagnetic daughter wave in the case where the solar wind is assumed homogeneous. Langmuir and transverse waves near critical density satisfy the Ioffe-Regel criterion for wave localization in the solar wind with observed density fluctuations of about 1 percent. Numerical simulations of wave propagation in random media confirm the localization length predictions of Escande and Souillard for stationary density fluctuations. For mobile density fluctuations, localized wave packets spread at the propagation velocity of the density fluctuations rather than the group velocity of the waves. Computer simulations using a linearized hybrid code show that an electron beam will excite localized Langmuir waves in a plasma with density turbulence. An action principle approach is used to develop a theory of nonlinear wave processes when waves are localized. A theory of resonant particle diffusion by localized waves is developed to explain the saturation of the beam-plasma instability. It is argued that localization of electromagnetic waves will allow the instability threshold to be exceeded for the parametric decay discussed above.
Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.
Jovanović, Stojan; Rotter, Stefan
2016-06-01
The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking, activity in a neuronal network. Using recent results from the theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology (random networks of Erdős-Rényi type and networks with highly interconnected hubs), we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.
Zeinab, Jalambadani; Gholamreza, Garmaroudi; Mehdi, Yaseri; Mahmood, Tavousi; Korush, Jafarian
2017-09-21
The Trans-Theoretical Model (TTM) and Theory of Planned Behaviour (TPB) may be promising models for understanding and predicting reduction in the consumption of fast food. The aim of this study was to examine the applicability of the TTM, and the additional predictive role of subjective norms and perceived behavioural control, in predicting reduced fast food consumption in obese Iranian adolescent girls. A cross-sectional study was conducted among twelve randomly selected schools in Sabzevar, Iran, from 2015 to 2017. Four hundred eighty-five randomly selected students consented to participate in the study. Hierarchical regression models were used, with SPSS version 22, to assess the role of the variables that can influence reduction in fast food consumption among students. Perceived behavioural control (r=0.58, P<0.001), subjective norms (r=0.51, P<0.001), self-efficacy (r=0.49, P<0.001), decisional balance (pros) (r=0.29, P<0.001), decisional balance (cons) (r=0.25, P<0.001), and stage of change (r=0.38, P<0.001) were significantly and positively correlated, while experiential processes of change (r=0.08, P=0.135) and behavioural processes of change (r=0.09, P=0.145) were not significant. The study demonstrated that the TTM (except the experiential and behavioural processes of change), together with perceived behavioural control and subjective norms, is a useful model for reduction in the consumption of fast food.
Scattering theory of efficient quantum transport across finite networks
NASA Astrophysics Data System (ADS)
Walschaers, Mattia; Mulet, Roberto; Buchleitner, Andreas
2017-11-01
We present a scattering theory for the efficient transmission of an excitation across a finite network with designed disorder. We show that the presence of randomly positioned network sites allows significant acceleration of the excitation transfer processes as compared to a dimer structure, but only if the disordered Hamiltonians are constrained to be centrosymmetric and exhibit a dominant doublet in their spectrum. We identify the cause of this efficiency enhancement to be the constructive interplay between disorder-induced fluctuations of the dominant doublet’s splitting and the coupling strength between the input and output sites to the scattering channels. We find that the characteristic strength of these fluctuations together with the channel coupling fully control the transfer efficiency.
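A hedged sketch of the structural ingredient named above (matrix size and ensemble are illustration choices, and a dominant doublet is something to inspect for, not guaranteed by the projection alone): sample a random symmetric Hamiltonian, project it onto the centrosymmetric matrices, and examine the top of its spectrum.

```python
import numpy as np

# Hedged sketch: random centrosymmetric Hamiltonian (J H J = H, with J the
# exchange matrix) and the splitting structure at the top of its spectrum.
rng = np.random.default_rng(8)
n = 8
J = np.eye(n)[::-1]                      # exchange (flip) matrix
A = rng.standard_normal((n, n))
H = (A + A.T) / 2.0                      # symmetric
H = (H + J @ H @ J) / 2.0                # ... and centrosymmetric
evals = np.linalg.eigvalsh(H)
doublet_split = evals[-1] - evals[-2]    # splitting within the top pair
gap_to_rest = evals[-2] - evals[-3]      # separation from the other levels
print(doublet_split, gap_to_rest)        # "dominant doublet": split << gap
```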
The fast algorithm of spark in compressive sensing
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark is a good index for studying it. But the computation of the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example the Gaussian random matrix and the 0-1 random matrix, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute the spark. One is direct searching; the other is dual-tree searching. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree searching method had higher efficiency than direct searching, especially for matrices with (nearly) as many rows as columns.
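A hedged sketch of the direct-searching route (the function name and tolerance are illustrative): the spark is the size of the smallest linearly dependent column subset, so one tests subsets of growing size; the exponential subset count is exactly why the problem is NP-hard.

```python
import numpy as np
from itertools import combinations

# Hedged sketch: spark by brute-force subset search (exponential cost).
def spark(A, tol=1e-10):
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k                 # smallest dependent subset found
    return m + 1                         # no dependent subset of size <= m

A = np.random.randn(4, 8)   # Gaussian case: spark = rows + 1 almost surely
print(spark(A))             # expect 5, matching the result quoted above
```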
Boson expansions based on the random phase approximation representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedrocchi, V.G.; Tamura, T.
1984-04-01
A new boson expansion theory based on the random phase approximation is presented. The boson expansions are derived here directly in the random phase approximation representation with the help of a technique that combines the use of the Usui operator with that of a new bosonization procedure, called the term-by-term bosonization method. The present boson expansion theory is constructed by retaining a single collective quadrupole random phase approximation component, a truncation that allows for a perturbative treatment of the whole problem. Both Hermitian and non-Hermitian boson expansions, valid for even nuclei, are obtained.
The investigation of social networks based on multi-component random graphs
NASA Astrophysics Data System (ADS)
Zadorozhnyi, V. N.; Yudin, E. B.
2018-01-01
Methods for calibrating non-homogeneous random graphs are developed for social network simulation. The graphs are calibrated by the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with a nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, and computer experiments with these models, would help developers (owners) of networks to predict their development correctly and to choose effective strategies for controlling network projects.
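A hedged sketch of the nonlinear preferential attachment ingredient (growth rule, exponent, and size are illustration choices, not the paper's calibration procedure): the attachment kernel f(k) = k**alpha is the knob one would tune until the simulated degree distribution matches the observed one.

```python
import numpy as np

# Hedged sketch: growth with nonlinear preferential attachment f(k) = k**alpha.
rng = np.random.default_rng(6)
alpha, n = 0.8, 2000
deg = [1, 1]                        # start from a single edge 0-1
edges = [(0, 1)]
for v in range(2, n):
    w = np.array(deg, dtype=float) ** alpha
    target = rng.choice(v, p=w / w.sum())   # attach to node ~ f(degree)
    edges.append((v, int(target)))
    deg.append(1)
    deg[target] += 1
hist = np.bincount(deg)             # empirical degree distribution to compare
```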
Disordered quivers and cold horizons
Anninos, Dionysios; Anous, Tarek; Denef, Frederik
2016-12-15
We analyze the low temperature structure of a supersymmetric quiver quantum mechanics with randomized superpotential coefficients, treating them as quenched disorder. These theories describe features of the low energy dynamics of wrapped branes, which in large number backreact into extremal black holes. We show that the low temperature theory, in the limit of a large number of bifundamentals, exhibits a time reparametrization symmetry as well as a specific heat linear in the temperature. Both these features resemble the behavior of black hole horizons in the zero temperature limit. We demonstrate similarities between the low temperature physics of the random quiver model and a theory of large N free fermions with random masses.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2017-06-01
Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This avoids irrational assumptions and oversimplifications in the process of parameter design and enriches the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexity and uncertainty.
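As a minimal illustration (not the paper's full ECCP formulation) of how a bi-random normal parameter collapses a single chance constraint to a deterministic inequality:

```latex
% Hedged sketch: one chance constraint with a bi-random normal RHS.
\[
  \Pr\{\, a^{\top} x \le b \,\} \ge \alpha,
  \qquad b \sim \mathcal{N}(\mu, \sigma^{2}), \quad
  \mu \sim \mathcal{N}(\mu_{0}, \sigma_{0}^{2}).
\]
% Unconditionally b ~ N(mu_0, sigma^2 + sigma_0^2), so the constraint has
% the deterministic equivalent
\[
  a^{\top} x \;\le\; \mu_{0} \;+\; \sqrt{\sigma^{2} + \sigma_{0}^{2}}\,
  \Phi^{-1}(1 - \alpha),
\]
% with Phi the standard normal CDF.
```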
Diffusion in the presence of a local attracting factor: Theory and interdisciplinary applications.
Veermäe, Hardi; Patriarca, Marco
2017-06-01
In many complex diffusion processes the drift of random walkers is not caused by an external force, as in the case of Brownian motion, but by local variations of fitness perceived by the random walkers. In this paper, a simple but general framework is presented that describes such a type of random motion and may be of relevance in different problems, such as opinion dynamics, cultural spreading, and animal movement. To this aim, we study the problem of a random walker in d dimensions moving in the presence of a local heterogeneous attracting factor expressed in terms of an assigned position-dependent "attractiveness function." At variance with standard Brownian motion, the attractiveness function introduced here regulates both the advection and the diffusion of the random walker, thus providing testable predictions for a specific form of fluctuation relations. We discuss the relation between the drift-diffusion equation based on the attractiveness function and that describing standard Brownian motion, and we provide some explicit examples illustrating its relevance in different fields, such as animal movement, chemotactic diffusion, and social dynamics.
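A hedged one-dimensional sketch of the idea (the Gaussian attractiveness function and the jump rule are illustration choices; the paper's exact transition rule may differ): the walker's step probabilities are weighted by the local attractiveness, so both drift and diffusion are modulated by A(x).

```python
import numpy as np

# Hedged sketch: 1-D random walker with attractiveness-weighted jumps.
rng = np.random.default_rng(4)
A = lambda x: np.exp(-x**2 / 2.0)     # illustrative attractiveness function
x, h = 5.0, 0.1                       # start position and step size
for _ in range(10_000):
    left, right = A(x - h), A(x + h)
    x += h if rng.random() < right / (left + right) else -h
# the walker drifts toward, and concentrates near, the maximum of A
print(x)
```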
Spatio-temporal Hotelling observer for signal detection from image sequences
Caucci, Luca; Barrett, Harrison H.; Rodríguez, Jeffrey J.
2010-01-01
Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection. PMID:19550494
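For reference, the linear template underlying this observer can be stated compactly in the standard notation of the literature (the spatio-temporal case follows by stacking all frames into one data vector g):

$t(\mathbf{g}) = \mathbf{w}^{\top}\mathbf{g}, \qquad \mathbf{w} = K_{\mathbf{g}}^{-1}\,\Delta\bar{\mathbf{g}}, \qquad \mathrm{SNR}^{2} = \Delta\bar{\mathbf{g}}^{\top} K_{\mathbf{g}}^{-1}\,\Delta\bar{\mathbf{g}},$

where Δḡ is the difference of the class means and K_g is the overall data covariance; when K_g is proportional to the identity, w reduces to the familiar matched filter, and the prewhitening by K_g^{-1} is what generalizes it.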
Individualizing drug dosage with longitudinal data.
Zhu, Xiaolu; Qu, Annie
2016-10-30
We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.
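A generic form of such a log-linear mixed-effect model (this display is an illustration consistent with the abstract, not the authors' exact specification) is

$\log \mathbb{E}\!\left[ y_{ij} \mid b_i \right] = x_{ij}^{\top}\beta + z_{ij}^{\top} b_i ,$

where y_ij is the response of patient i at occasion j, the fixed effects β are shared across patients, and the subject-specific random effects b_i are left distribution-free, consistent with treating them as realizations of an unspecified stochastic process.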
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
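A present-day sketch of the kind of model-sampling program outlined (the parameters and the wall treatment are illustrative; this is not Sheppard's original code):

    import random

    def walk(steps, p_right=0.5, boundary=None, absorb=False, seed=None):
        """Model-sampling (Monte Carlo) simulation of a 1-d random walk.
        p_right > 0.5 adds drift; an optional wall at `boundary` either
        reflects the walker or absorbs it, ending the walk."""
        rng = random.Random(seed)
        x = 0
        for _ in range(steps):
            x += 1 if rng.random() < p_right else -1
            if boundary is not None and x == boundary:
                if absorb:
                    break       # walker sticks to the wall
                x -= 1          # reflecting wall: bounce straight back
        return x

    # Drift check: mean displacement is close to steps * (2*p_right - 1).
    ends = [walk(1000, p_right=0.55, seed=i) for i in range(500)]
    print(sum(ends) / len(ends))   # about 100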
Application of Solidification Theory to Rapid Solidification Processing
1983-08-01
Discusses theoretical limits on the solidification velocity that produce a special "banded" microstructure in Ag-Cu alloys, and partitionless rapid solidification of a NiAl-Cr quasibinary eutectic alloy rather than a disordered structure incorporating Ni and Al into Cr randomly (fragmentary OCR of a reference list citing W. J. Boettinger, R. J. Schaefer, F. Biancaniello, and D. Shechtman, Met. Trans. A).
No extension of quantum theory can have improved predictive power.
Colbeck, Roger; Renner, Renato
2011-08-02
According to quantum theory, measurements generate random outcomes, in stark contrast with classical mechanics. This raises the question of whether there could exist an extension of the theory that removes this indeterminism, as suspected by Einstein, Podolsky and Rosen. Although this has been shown to be impossible, existing results do not imply that the current theory is maximally informative. Here we ask the more general question of whether any improved predictions can be achieved by any extension of quantum theory. Under the assumption that measurements can be chosen freely, we answer this question in the negative: no extension of quantum theory can give more information about the outcomes of future measurements than quantum theory itself. Our result has significance for the foundations of quantum mechanics, as well as applications to tasks that exploit the inherent randomness in quantum theory, such as quantum cryptography.
No extension of quantum theory can have improved predictive power
Colbeck, Roger; Renner, Renato
2011-01-01
According to quantum theory, measurements generate random outcomes, in stark contrast with classical mechanics. This raises the question of whether there could exist an extension of the theory that removes this indeterminism, as suspected by Einstein, Podolsky and Rosen. Although this has been shown to be impossible, existing results do not imply that the current theory is maximally informative. Here we ask the more general question of whether any improved predictions can be achieved by any extension of quantum theory. Under the assumption that measurements can be chosen freely, we answer this question in the negative: no extension of quantum theory can give more information about the outcomes of future measurements than quantum theory itself. Our result has significance for the foundations of quantum mechanics, as well as applications to tasks that exploit the inherent randomness in quantum theory, such as quantum cryptography. PMID:21811240
Random potentials and cosmological attractors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linde, Andrei, E-mail: alinde@stanford.edu
I show that the problem of realizing inflation in theories with random potentials of a limited number of fields can be solved, and agreement with the observational data can be naturally achieved if at least one of these fields has a non-minimal kinetic term of the type used in the theory of cosmological α-attractors.
Randomized Item Response Theory Models
ERIC Educational Resources Information Center
Fox, Jean-Paul
2005-01-01
The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR…
Constructing acoustic timefronts using random matrix theory.
Hegewisch, Katherine C; Tomsovic, Steven
2013-10-01
In a recent letter [Hegewisch and Tomsovic, Europhys. Lett. 97, 34002 (2012)], random matrix theory is introduced for long-range acoustic propagation in the ocean. The theory is expressed in terms of unitary propagation matrices that represent the scattering between acoustic modes due to sound speed fluctuations induced by the ocean's internal waves. The scattering exhibits a power-law decay as a function of the differences in mode numbers thereby generating a power-law, banded, random unitary matrix ensemble. This work gives a more complete account of that approach and extends the methods to the construction of an ensemble of acoustic timefronts. The result is a very efficient method for studying the statistical properties of timefronts at various propagation ranges that agrees well with propagation based on the parabolic equation. It helps identify which information about the ocean environment can be deduced from the timefronts and how to connect features of the data to that environmental information. It also makes direct connections to methods used in other disordered waveguide contexts where the use of random matrix theory has a multi-decade history.
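One simple way to generate a unitary with an approximately power-law banded coupling profile (an illustration of the ensemble's structure, not the authors' construction; requires NumPy and SciPy):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)

    def banded_unitary(n, bandwidth=4.0, power=2.0):
        """Draw a Hermitian matrix whose off-diagonal variance decays as
        a power law in |i - j|, then exponentiate it; the resulting
        unitary inherits a banded, power-law coupling profile."""
        i, j = np.indices((n, n))
        sigma = 1.0 / (1.0 + (np.abs(i - j) / bandwidth) ** power)
        h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        h = (h + h.conj().T) / 2 * np.sqrt(sigma)   # Hermitian, banded
        return expm(-1j * h)

    U = banded_unitary(64)
    print(np.allclose(U @ U.conj().T, np.eye(64)))   # True: U is unitary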
Nilsson, Håkan; Juslin, Peter; Winman, Anders
2016-01-01
Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by a random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments, were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions by the model suggested by Costello and Watts (2014). © 2015 APA, all rights reserved.
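The noise model at issue is commonly summarized as follows (a schematic restatement with assumed notation, not a quotation of either paper): if each memory sample bearing on event A is misread with probability d, the mean judgment is

$\mathbb{E}[\hat P(A)] = (1-2d)\,P(A) + d ,$

and the additive identity P(A) + P(B) - P(A∧B) - P(A∨B) = 0 survives in expectation because the d terms cancel, which is why such sum-based predictions cannot by themselves discriminate noisy probability theory from heuristic integration.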
NASA Astrophysics Data System (ADS)
Lu, Zhong-Lin; Sperling, George
2002-10-01
Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu–Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure–ground segmentation based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating feature displays, in which stereo depth alternates with other features such as texture orientation, indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to: Is it third-order alone, or third-order plus dedicated depth-motion processing? Two new experiments intended to support the dedicated depth-motion-processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector. All the evidence accumulated so far is consistent with the three-motion-systems theory. © 2002 Optical Society of America
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
Wright's Shifting Balance Theory and the Diversification of Aposematic Signals
Chouteau, Mathieu; Angers, Bernard
2012-01-01
Despite accumulating evidence for selection within natural systems, the importance of random genetic drift, opposing Wright's and Fisher's views of evolution, continues to be a subject of controversy. The geographical diversification of aposematic signals appears to be a suitable system to assess the factors involved in the process of adaptation since both theories were independently proposed to explain this phenomenon. In the present study, the effects of drift and selection were assessed from population genetics and predation experiments on poison-dart frogs, Ranitomeya imitator, of Northern Peru. We specifically focus on the transient zone between two distinct aposematic signals. In contrast to regions where high predation maintains a monomorphic aposematic signal, the transient zones are characterized by lowered selection and a high phenotypic diversity. As a result, the diversification of phenotypes may occur via genetic drift without a significant loss of fitness. These new phenotypes may then colonize alternative habitats if successfully recognized and avoided by predators. This study highlights the interplay between drift and selection as determinant processes in the adaptive diversification of aposematic signals. Results are consistent with the expectations of Wright's shifting balance theory and represent, to our knowledge, the first empirical demonstration of this highly contested theory in a natural system. PMID:22470509
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
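A skeleton of the EBFSR simulation loop (the toy hydrologic model and HUP sampler below are stand-ins for illustration, not the operational components):

    import numpy as np

    rng = np.random.default_rng(3)

    def ebfsr(input_ensemble, hydro_model, hup_sample, k):
        """Each input member is run once through the deterministic
        hydrologic model; the hydrologic uncertainty processor (HUP)
        then draws k predictand members per model run, enlarging the
        ensemble without additional expensive model runs."""
        forecast = []
        for u in input_ensemble:           # e.g. precipitation amounts
            s = hydro_model(u)             # one deterministic model run
            forecast.extend(hup_sample(s) for _ in range(k))
        return np.asarray(forecast)

    # Toy stand-ins (assumptions, not the operational IEF/model/HUP):
    hydro_model = lambda u: 0.8 * u
    hup_sample = lambda s: s + rng.normal(0.0, 0.1 * abs(s) + 0.01)
    ens = ebfsr(rng.gamma(2.0, 1.0, size=50), hydro_model, hup_sample, k=10)
    print(ens.size)   # 500 predictand members from only 50 model runs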
Pinto, B M; Lynn, H; Marcus, B H; DePue, J; Goldstein, M G
2001-01-01
In theory-based interventions for behavior change, there is a need to examine the effects of interventions on the underlying theoretical constructs and the mediating role of such constructs. These two questions are addressed in the Physically Active for Life study, a randomized trial of physician-based exercise counseling for older adults. Three hundred fifty-five patients participated (intervention n = 181, control n = 174; mean age = 65.6 years). The underlying theories used were the Transtheoretical Model, Social Cognitive Theory and the constructs of decisional balance (benefits and barriers), self-efficacy, and behavioral and cognitive processes of change. Motivational readiness for physical activity and related constructs were assessed at baseline, 6 weeks, and 8 months. Linear or logistic mixed effects models were used to examine intervention effects on the constructs, and logistic mixed effects models were used for mediator analyses. At 6 weeks, the intervention had significant effects on decisional balance, self-efficacy, and behavioral processes, but these effects were not maintained at 8 months. At 6 weeks, only decisional balance and behavioral processes were identified as mediators of motivational readiness outcomes. Results suggest that interventions of greater intensity and duration may be needed for sustained changes in mediators and motivational readiness for physical activity among older adults.
Local Random Quantum Circuits are Approximate Polynomial-Designs
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Harrow, Aram W.; Horodecki, Michał
2016-09-01
We prove that local random quantum circuits acting on n qubits composed of O(t^10 n^2) many nearest-neighbor two-qubit gates form an approximate unitary t-design. Previously it was unknown whether random quantum circuits were a t-design for any t > 3. The proof is based on an interplay of techniques from quantum many-body theory, representation theory, and the theory of Markov chains. In particular we employ a result of Nachtergaele for lower bounding the spectral gap of frustration-free quantum local Hamiltonians; a quasi-orthogonality property of permutation matrices; a result of Oliveira which extends to the unitary group the path-coupling method for bounding the mixing time of random walks; and a result of Bourgain and Gamburd showing that dense subgroups of the special unitary group, composed of elements with algebraic entries, are ∞-copy tensor-product expanders. We also consider pseudo-randomness properties of local random quantum circuits of small depth and prove that circuits of depth O(t^10 n) constitute a quantum t-copy tensor-product expander. The proof also rests on techniques from quantum many-body theory, in particular on the detectability lemma of Aharonov, Arad, Landau, and Vazirani. We give applications of the results to cryptography, equilibration of closed quantum dynamics, and the generation of topological order. In particular we show the following pseudo-randomness property of generic quantum circuits: almost every circuit U of size O(n^k) on n qubits cannot be distinguished from a Haar uniform unitary by circuits of size O(n^((k-9)/11)) that are given oracle access to U.
Contextual community prevention theory: building interventions with community agency collaboration.
Morales, Eduardo S
2009-11-01
Translation from research to practice faces numerous problems that include replicating effectiveness, fidelity to the protocol and processes, and adaptations to different types of target populations. Working collaboratively with existing service providers can speed up the time for development and can ease the implementation of empirical randomized trials. Contextual community prevention theory is an innovative approach that focuses on changing behaviors of community members by creating a visible institutional presence that draws and pulls the targeted population into the organization's activities and interventions. The result is an institution or organization within the community that provides a new active and dynamic context, engaging its community members into its activities, interventions, and functions. An HIV prevention program developed collaboratively from the ground up for Latino gay/bisexual men is presented. Results from the program evaluation efforts across the years suggest promise for testing its efficacy through a randomized trial. HIV prevention efforts need to develop dynamic support systems within communities where these men have ownership, have control, and feel safe; otherwise HIV infection rates in this population will increase. Copyright 2009 by the American Psychological Association
Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective
Qian, Xiaoning; Dougherty, Edward R.
2017-01-01
The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class of joint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context of linear Gaussian models. PMID:28824268
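Schematically (with notation assumed here, consistent with the abstract), the OBF solves

$\psi_{\mathrm{OBF}} = \arg\min_{\psi \in \Psi}\; \mathbb{E}_{\theta \mid \mathcal{D}}\!\left[ C_{\theta}(\psi) \right],$

where θ indexes the uncertainty class of joint random-process models, C_θ(ψ) is the filtering cost (e.g., mean-square error) under model θ, and the expectation is taken over the posterior given the prior and the sample data D; with no data the posterior reverts to the prior and the IBRF is recovered.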
Testing the criterion for correct convergence in the complex Langevin method
NASA Astrophysics Data System (ADS)
Nagata, Keitaro; Nishimura, Jun; Shimasaki, Shinji
2018-05-01
Recently the complex Langevin method (CLM) has been attracting attention as a solution to the sign problem, which occurs in Monte Carlo calculations when the effective Boltzmann weight is not real positive. An undesirable feature of the method, however, is that in some parameter regions it can yield wrong results even if the Langevin process reaches equilibrium without any problem. In our previous work, we proposed a practical criterion for correct convergence based on the probability distribution of the drift term that appears in the complex Langevin equation. Here we demonstrate the usefulness of this criterion in two solvable theories with many dynamical degrees of freedom, i.e., two-dimensional Yang-Mills theory with a complex coupling constant and the chiral Random Matrix Theory for finite density QCD, which were studied by the CLM before. Our criterion can indeed tell the parameter regions in which the CLM gives correct results.
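For orientation, the complexified Langevin dynamics and the quantity entering the criterion can be written as (standard form; normalization conventions vary):

$\dot z = v(z) + \eta(t), \qquad v(z) = -\frac{\partial S(z)}{\partial z}, \qquad \langle \eta(t)\,\eta(t') \rangle = 2\,\delta(t - t'),$

and the proposed criterion asks that the probability distribution of the drift magnitude u = |v(z)|, sampled along the Langevin process, fall off exponentially or faster at large u for the results to be trusted.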
Theory of Dielectric Breakdown in Randomly Inhomogeneous Materials
NASA Astrophysics Data System (ADS)
Gyure, Mark Franklin
1990-01-01
Two models of dielectric breakdown in disordered metal-insulator composites have been developed in an attempt to explain in detail the greatly reduced breakdown electric field observed in these materials. The first model is a two dimensional model in which the composite is treated as a random array of conducting cylinders embedded in an otherwise uniform dielectric background. The two dimensional samples are generated by the Monte Carlo method and a discretized version of the integral form of Laplace's equation is solved to determine the electric field in each sample. Breakdown is modeled as a quasi-static process by which one breakdown at a time occurs at the point of maximum electric field in the system. A cascade of these local breakdowns leads to complete dielectric failure of the system after which the breakdown field can be determined. A second model is developed that is similar to the first in terms of breakdown dynamics, but uses coupled multipole expansions of the electrostatic potential centered at each particle to obtain a more computationally accurate and faster solution to the problem of determining the electric field at an arbitrary point in a random medium. This new algorithm allows extension of the model to three dimensions and treats conducting spherical inclusions as well as cylinders. Successful implementation of this algorithm relies on the use of analytical forms for off-centered expansions of cylindrical and spherical harmonics. Scaling arguments similar to those used in theories of phase transitions are developed for the breakdown field and these arguments are discussed in context with other theories that have been developed to explain the break-down behavior of random resistor and fuse networks. Finally, one of the scaling arguments is used to predict the breakdown field for some samples of solid fuel rocket propellant tested at the China Lake Naval Weapons Center and is found to compare quite well with the experimentally measured breakdown fields.
Mummah, Sarah; Robinson, Thomas N; Mathur, Maya; Farzinkhou, Sarah; Sutton, Stephen; Gardner, Christopher D
2017-09-15
Mobile applications (apps) have been heralded as transformative tools to deliver behavioral health interventions at scale, but few have been tested in rigorous randomized controlled trials. We tested the effect of a mobile app to increase vegetable consumption among overweight adults attempting weight loss maintenance. Overweight adults (n=135) aged 18-50 years with BMI 28-40 kg/m² near Stanford, CA were recruited from an ongoing 12-month weight loss trial (parent trial) and randomly assigned to either the stand-alone, theory-based Vegethon mobile app (enabling goal setting, self-monitoring, and feedback and using "process motivators" including fun, surprise, choice, control, social comparison, and competition) or a wait-listed control condition. The primary outcome was daily vegetable servings, measured by an adapted Harvard food frequency questionnaire (FFQ) 8 weeks post-randomization. Daily vegetable servings from 24-hour dietary recalls, administered by trained, certified, and blinded interviewers 5 weeks post-randomization, was included as a secondary outcome. All analyses were conducted according to principles of intention-to-treat. Daily vegetable consumption was significantly greater in the intervention versus control condition for both measures (adjusted mean difference: 2.0 servings; 95% CI: 0.1, 3.8, p=0.04 for FFQ; and 1.0 servings; 95% CI: 0.2, 1.9; p=0.02 for 24-hour recalls). Baseline vegetable consumption was a significant moderator of intervention effects (p=0.002) in which effects increased as baseline consumption increased. These results demonstrate the efficacy of a mobile app to increase vegetable consumption among overweight adults. Theory-based mobile interventions may present a low-cost, scalable, and effective approach to improving dietary behaviors and preventing associated chronic diseases. ClinicalTrials.gov NCT01826591. Registered 27 March 2013.
Analysis of entropy extraction efficiencies in random number generation systems
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu
2016-05-01
Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.
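A classic example of an extraction post-processor of the general kind such reviews discuss is von Neumann debiasing (shown for illustration; it is not claimed to be among the specific arrival-time schemes analyzed in the manuscript):

    def von_neumann_extract(bits):
        """Classic unbiasing post-processor: read non-overlapping bit
        pairs, emit 0 for '01' and 1 for '10', discard '00' and '11'.
        Output is unbiased for i.i.d. input bits, at the cost of a
        reduced, entropy-limited output rate."""
        out = []
        for a, b in zip(bits[::2], bits[1::2]):
            if a != b:
                out.append(a)
        return out

    raw = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]    # e.g. thresholded arrival times
    print(von_neumann_extract(raw))          # -> [0, 1, 0]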
NASA Astrophysics Data System (ADS)
Tibell, Lena A. E.; Harms, Ute
2017-11-01
Modern evolutionary theory is both a central theory and an integrative framework of the life sciences. This is reflected in the common references to evolution in modern science education curricula and contexts. In fact, evolution is a core idea that is supposed to support biology learning by facilitating the organization of relevant knowledge. In addition, evolution can function as a pivotal link between concepts and highlight similarities in the complexity of biological concepts. However, empirical studies in many countries have for decades identified deficiencies in students' scientific understanding of evolution mainly focusing on natural selection. Clearly, there are major obstacles to learning natural selection, and we argue that to overcome them, it is essential to address explicitly the general abstract concepts that underlie the biological processes, e.g., randomness or probability. Hence, we propose a two-dimensional framework for analyzing and structuring teaching of natural selection. The first—purely biological—dimension embraces the three main principles variation, heredity, and selection structured in nine key concepts that form the core idea of natural selection. The second dimension encompasses four so-called thresholds, i.e., general abstract and/or non-perceptual concepts: randomness, probability, spatial scales, and temporal scales. We claim that both of these dimensions must be continuously considered, in tandem, when teaching evolution in order to allow development of a meaningful understanding of the process. Further, we suggest that making the thresholds tangible with the aid of appropriate kinds of visualizations will facilitate grasping of the threshold concepts, and thus, help learners to overcome the difficulties in understanding the central theory of life.
NASA Astrophysics Data System (ADS)
Ohlídal, Ivan; Vohánka, Jiří; Čermák, Martin; Franta, Daniel
2017-10-01
The modification of the effective medium approximation for randomly microrough surfaces covered by very thin overlayers based on inhomogeneous fictitious layers is formulated. The numerical analysis of this modification is performed using simulated ellipsometric data calculated using the Rayleigh-Rice theory. The system used to perform this numerical analysis consists of a randomly microrough silicon single crystal surface covered with a SiO2 overlayer. A comparison to the effective medium approximation based on homogeneous fictitious layers is carried out within this numerical analysis. For ellipsometry of the system mentioned above the possibilities and limitations of both the effective medium approximation approaches are discussed. The results obtained by means of the numerical analysis are confirmed by the ellipsometric characterization of two randomly microrough silicon single crystal substrates covered with native oxide overlayers. It is shown that the effective medium approximation approaches for this system exhibit strong deficiencies compared to the Rayleigh-Rice theory. The practical consequences implied by these results are presented. The results concerning the random microroughness are verified by means of measurements performed using atomic force microscopy.
NASA Astrophysics Data System (ADS)
Jamaluddin, Muzhar Bin
The Boson Expansion Theory of Kishimoto and Tamura has proved to be very successful in describing quadrupole collective motions in even-even nuclei. This theory, however, involves a complicated transformation from the Tamm-Dancoff phonons to the phonons of the Random Phase Approximation. In this thesis a Boson Expansion formalism, derived directly from the Random Phase Approximation and set forth by Pedrocchi and Tamura, is used to derive the boson forms of the nuclear Hamiltonian and the electromagnetic transition operator. Detailed discussions of the formalism of Pedrocchi and Tamura and its extension needed to perform realistic calculations are presented. The technique used to derive the boson forms and the formulae used in the calculations are also given a thorough treatment to demonstrate the simplicity of this approach. Finally, the theory is tested by applying it to calculate the energy levels and some electromagnetic properties of the Samarium isotopes. The results show that the present theory is capable of describing the range of behavior from a vibrational to a rotational character of the Samarium isotopes as well as the previous theory.
Localization Transition Induced by Learning in Random Searches
NASA Astrophysics Data System (ADS)
Falcón-Cortés, Andrea; Boyer, Denis; Giuggioli, Luca; Majumdar, Satya N.
2017-10-01
We solve an adaptive search model where a random walker or Lévy flight stochastically resets to previously visited sites on a d -dimensional lattice containing one trapping site. Because of reinforcement, a phase transition occurs when the resetting rate crosses a threshold above which nondiffusive stationary states emerge, localized around the inhomogeneity. The threshold depends on the trapping strength and on the walker's return probability in the memoryless case. The transition belongs to the same class as the self-consistent theory of Anderson localization. These results show that similarly to many living organisms and unlike the well-studied Markovian walks, non-Markov movement processes can allow agents to learn about their environment and promise to bring adaptive solutions in search tasks.
van Rossum, Joris
2006-01-01
In its essence, the explanatory potential of the theory of natural selection is based on the iterative process of random production and variation, and subsequent non-random, directive selection. It is shown that within this explanatory framework, there is no place for the explanation of sexual reproduction. Thus in Darwinistic literature, sexual reproduction - one of nature's most salient characteristics - is often either assumed or ignored, but not explained. This fundamental and challenging gap within a complete naturalistic understanding of living beings calls for a cybernetic account of sexual reproduction, meaning an understanding of the dynamic and creative potential of living beings to continuously and autonomously produce new organisms with unique and specific constellations.
Genetic algorithms as global random search methods
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
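In this spirit, a minimal genetic algorithm showing proportional selection as the global search operator and recombination exploiting similarities between candidates (the one-max fitness and all parameter values are illustrative assumptions):

    import random

    rng = random.Random(0)

    def ga(fitness, n_bits=20, pop_size=40, gens=60, p_mut=0.02):
        """Minimal GA: proportional selection acts as the global search
        operator; one-point crossover exploits similarities between
        candidate solutions; mutation maintains diversity."""
        pop = [[rng.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(gens):
            scores = [fitness(ind) for ind in pop]
            nxt = []
            for _ in range(pop_size):
                a = rng.choices(pop, weights=scores, k=1)[0]
                b = rng.choices(pop, weights=scores, k=1)[0]
                cut = rng.randrange(1, n_bits)       # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g ^ (rng.random() < p_mut) for g in child]
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = ga(fitness=lambda ind: 1 + sum(ind))   # one-max, offset so weights > 0
    print(sum(best))   # close to n_bits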
NASA Technical Reports Server (NTRS)
Noll, Thomas E.
1990-01-01
The paper describes recent accomplishments and current research projects along four main thrusts in aeroservoelasticity at NASA Langley. One activity focuses on enhancing the modeling and analysis procedures to accurately predict aeroservoelastic interactions. Improvements to the minimum-state method of approximating unsteady aerodynamics are shown to provide precise low-order models for design and simulation tasks. Recent extensions in aerodynamic correction-factor methodology are also described. With respect to analysis procedures, the paper reviews novel enhancements to matched filter theory and random process theory for predicting the critical gust profile and the associated time-correlated gust loads for structural design considerations. Two research projects leading towards improved design capability are also summarized: (1) an integrated structure/control design capability and (2) procedures for obtaining low-order robust digital control laws for aeroelastic applications.
NASA Astrophysics Data System (ADS)
Sato, Haruo; Hayakawa, Toshihiko
2014-10-01
Short-period seismograms of earthquakes are complex especially beneath volcanoes, where the S wave mean free path is short and low velocity bodies composed of melt or fluid are expected in addition to random velocity inhomogeneities as scattering sources. Resonant scattering inherent in a low velocity body shows trapping and release of waves with a delay time. Focusing on the delay-time phenomenon, we must seriously consider multiple resonant scattering processes. Since wave phases are complex in such a scattering medium, the radiative transfer theory has often been used to synthesize the variation of the mean square (MS) amplitude of waves; however, resonant scattering has not been well adopted in the conventional radiative transfer theory. Here, as a simple mathematical model, we study the sequence of isotropic resonant scattering of a scalar wavelet by low velocity spheres at low frequencies, where the inside velocity is supposed to be low enough. We first derive the total scattering cross-section per time for each order of scattering as the convolution kernel representing the decaying scattering response. Then, for a random and uniform distribution of such identical resonant isotropic scatterers, we build the propagator of the MS amplitude by using causality, a geometrical spreading factor and the scattering loss. Using those propagators and convolution kernels, we formulate the radiative transfer equation for a spherically impulsive radiation from a point source. The synthesized MS amplitude time trace shows a dip just after the direct arrival and a delayed swelling, and then a decaying tail at large lapse times. The delayed swelling is a prominent effect of resonant scattering. The space distribution of synthesized MS amplitude shows a swelling near the source region in space, and it becomes a bell shape like a diffusion solution at large lapse times.
Knowlden, Adam P; Sharma, Manoj; Cottrell, Randall R; Wilson, Bradley R A; Johnson, Marcus Lee
2015-04-01
The family and home environment is an influential antecedent of childhood obesity. The purpose of this study was to pilot test The Enabling Mothers to Prevent Pediatric Obesity through Web-Based Education and Reciprocal Determinism (EMPOWER) intervention; a newly developed, theory-based, online program for prevention of childhood obesity. The two-arm, parallel group, randomized, participant-blinded trial targeted mothers with children between 4 and 6 years of age. Measures were collected at baseline, 4 weeks, and 8 weeks to evaluate programmatic effects on constructs of social cognitive theory (SCT) and obesity-related behaviors. Process evaluation transpired concurrently with each intervention session. Fifty-seven participants were randomly assigned to receive either experimental EMPOWER (n = 29) or active control Healthy Lifestyles (n = 28) intervention. Significant main effects were identified for child physical activity, sugar-free beverage consumption, and screen time, indicating that both groups improved in these behaviors. A significant group-by-time interaction was detected for child fruit and vegetable (FV) consumption as well as the SCT construct of environment in the EMPOWER cohort. An increase of 1.613 cups of FVs (95% confidence interval = [0.698, 2.529]) was found in the experimental group, relative to the active control group. Change score analysis found changes in the home environment accounted for 31.4% of the change in child FV intake for the experimental group. Child physical activity, sugar-free beverage consumption, and screen time improved in both groups over the course of the trial. Only the theory-based intervention was efficacious in increasing child FV consumption. The EMPOWER program was robust for inducing change in the home environment leading to an increase in child FV intake (Cohen's f = 0.160). © 2014 Society for Public Health Education.
The glassy random laser: replica symmetry breaking in the intensity fluctuations of emission spectra
Antenucci, Fabrizio; Crisanti, Andrea; Leuzzi, Luca
2015-01-01
The behavior of a newly introduced overlap parameter, measuring the correlation between intensity fluctuations of waves in random media, is analyzed in different physical regimes, with varying amount of disorder and non-linearity. This order parameter allows one to identify the laser transition in random media and describes its possible glassy nature in terms of emission spectra data, the only data so far accessible in random laser measurements. The theoretical analysis is performed in terms of the complex spherical spin-glass model, a statistical mechanical model describing the onset and the behavior of random lasers in open cavities. Replica Symmetry Breaking theory allows one to discern different kinds of randomness in the high pumping regime, including the most complex and intriguing glassy randomness. The outcome of the theoretical study is, eventually, compared to recent intensity fluctuation overlap measurements demonstrating the validity of the theory and providing a straightforward interpretation of qualitatively different spectral behaviors in different random lasers. PMID:26616194
ERIC Educational Resources Information Center
Clark, Heddy Kovach; Ringwalt, Chris L.; Shamblen, Stephen R.; Hanley, Sean M.
2011-01-01
Using a randomized controlled effectiveness trial, we examined the effects of Project SUCCESS on a range of secondary outcomes, including the program's mediating variables. Project SUCCESS, which is based both on the Theory of Reasoned Action and on Cognitive Behavior Theory, is a school-based substance use prevention program that targets…
ERIC Educational Resources Information Center
Hedeker, Donald; And Others
1996-01-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example, M. Fishbein and I. Ajzen's theory of reasoned action is examined. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate individual influences…
Group field theory and tensor networks: towards a Ryu–Takayanagi formula in full quantum gravity
NASA Astrophysics Data System (ADS)
Chirco, Goffredo; Oriti, Daniele; Zhang, Mingyi
2018-06-01
We establish a dictionary between group field theory (thus, spin networks and random tensors) states and generalized random tensor networks. Then, we use this dictionary to compute the Rényi entropy of such states and recover the Ryu–Takayanagi formula, in two different cases corresponding to two different truncations/approximations, suggested by the established correspondence.
Effect of wave localization on plasma instabilities. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Levedahl, William Kirk
1987-01-01
The Anderson model of wave localization in random media is invoked to study the effect of solar wind density turbulence on plasma processes associated with the solar type III radio burst. ISEE-3 satellite data indicate that a possible model for the type III process is the parametric decay of Langmuir waves excited by solar flare electron streams into daughter electromagnetic and ion acoustic waves. The threshold for this instability, however, is much higher than observed Langmuir wave levels because of rapid wave convection of the transverse electromagnetic daughter wave in the case where the solar wind is assumed homogeneous. Langmuir and transverse waves near critical density satisfy the Ioffe-Regel criteria for wave localization in the solar wind with observed density fluctuations of about 1 percent. Numerical simulations of wave propagation in random media confirm the localization length predictions of Escande and Souillard for stationary density fluctuations. For mobile density fluctuations, localized wave packets spread at the propagation velocity of the density fluctuations rather than the group velocity of the waves. Computer simulations using a linearized hybrid code show that an electron beam will excite localized Langmuir waves in a plasma with density turbulence. An action principle approach is used to develop a theory of non-linear wave processes when waves are localized. A theory of resonant particle diffusion by localized waves is developed to explain the saturation of the beam-plasma instability. It is argued that localization of electromagnetic waves will allow the instability threshold to be exceeded for the parametric decay discussed above.
Theory of activated glassy dynamics in randomly pinned fluids.
Phan, Anh D; Schweizer, Kenneth S
2018-02-07
We generalize the force-level, microscopic, Nonlinear Langevin Equation (NLE) theory and its elastically collective generalization [elastically collective nonlinear Langevin equation (ECNLE) theory] of activated dynamics in bulk spherical particle liquids to address the influence of random particle pinning on structural relaxation. The simplest neutral confinement model is analyzed for hard spheres where there is no change of the equilibrium pair structure upon particle pinning. As the pinned fraction grows, cage scale dynamical constraints are intensified in a manner that increases with density. This results in the mobile particles becoming more transiently localized, with increases of the jump distance, cage scale barrier, and NLE theory mean hopping time; subtle changes of the dynamic shear modulus are predicted. The results are contrasted with recent simulations. Similarities in relaxation behavior are identified in the dynamic precursor regime, including a roughly exponential, or weakly supra-exponential, growth of the alpha time with pinning fraction and a reduction of dynamic fragility. However, the increase of the alpha time with pinning predicted by the local NLE theory is too small and severely so at very high volume fractions. The strong deviations are argued to be due to the longer range collective elasticity aspect of the problem which is expected to be modified by random pinning in a complex manner. A qualitative physical scenario is offered for how the three distinct aspects that quantify the elastic barrier may change with pinning. ECNLE theory calculations of the alpha time are then presented based on the simplest effective-medium-like treatment for how random pinning modifies the elastic barrier. The results appear to be consistent with most, but not all, trends seen in recent simulations. Key open problems are discussed with regard to both theory and simulation.
A theory of eu-estrogenemia: a unifying concept
Turner, Ralph J.; Kerber, Irwin J.
2017-01-01
Abstract Objective: The aim of the study was to propose a unifying theory for the role of estrogen in postmenopausal women through examples in basic science, randomized controlled trials, observational studies, and clinical practice. Methods: Review and evaluation of the literature relating to estrogen. Discussion: The role of hormone therapy and ubiquitous estrogen receptors after reproductive senescence gains insight from basic science models. Observational studies and individualized patient care in clinical practice may show outcomes that are not reproduced in randomized clinical trials. The understanding gained from the timing hypothesis for atherosclerosis, the critical window theory in neurosciences, randomized controlled trials, and numerous genomic and nongenomic actions of estrogen discovered in basic science provides new explanations to clinical challenges that practitioners face. Consequences of a hypo-estrogenemic duration in women's lives are poorly understood. The Study of Women's Health Across the Nation suggests its magnitude is greater than was previously acknowledged. We propose that the healthy user bias was the result of surgical treatment (hysterectomy with oophorectomy) for many gynecological maladies followed by pharmacological and physiological doses of estrogen to optimize patient quality of life. The past decade of research has begun to demonstrate the role of estrogen in homeostasis. Conclusions: The theory of eu-estrogenemia provides a robust framework to unify the timing hypothesis, critical window theory, randomized controlled trials, the basic science of estrogen receptors, and clinical observations of patients over the past five decades. PMID:28562489
Phenotypic switching of populations of cells in a stochastic environment
NASA Astrophysics Data System (ADS)
Hufton, Peter G.; Lin, Yen Ting; Galla, Tobias
2018-02-01
In biology, phenotypic switching is a common bet-hedging strategy in the face of uncertain environmental conditions. Existing mathematical models often focus on periodically changing environments to determine the optimal phenotypic response. We focus on the case in which the environment switches randomly between discrete states. Starting from an individual-based model we derive stochastic differential equations to describe the dynamics, and obtain analytical expressions for the mean instantaneous growth rates based on the theory of piecewise-deterministic Markov processes. We show that optimal phenotypic responses are non-trivial for slow and intermediate environmental processes, and systematically compare the cases of periodic and random environments. The best response to random switching is more likely to be heterogeneity than in the case of deterministic periodic environments, and net growth rates tend to be higher under stochastic environmental dynamics. The combined system of environment and population of cells can be interpreted as a host-pathogen interaction, in which the host tries to choose environmental switching so as to minimise growth of the pathogen, and in which the pathogen employs a phenotypic switching optimised to increase its growth rate. We discuss the existence of Nash-like mutual best-response scenarios for such host-pathogen games.
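As a minimal illustration of the kind of growth-rate quantity involved (a two-state telegraph environment with switching rates k12 and k21 and phenotype growth rate g_σ in environment σ; this toy display is an assumption, not the paper's model), the long-run growth rate of a non-switching lineage is the stationary time average

$\Lambda = \lim_{T\to\infty} \frac{1}{T}\int_0^T g_{\sigma(t)}\,dt = \frac{k_{21}\,g_1 + k_{12}\,g_2}{k_{12} + k_{21}} ,$

so any benefit of phenotypic switching must be measured against this baseline.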
Student conceptions of natural selection and its role in evolution
NASA Astrophysics Data System (ADS)
Bishop, Beth A.; Anderson, Charles W.
Pretests and posttests on the topic of evolution by natural selection were administered to students in a college nonmajors' biology course. Analysis of test responses revealed that most students understood evolution as a process in which species respond to environmental conditions by changing gradually over time. Student thinking differed from accepted biological theory in that (a) changes in traits were attributed to a need-driven adaptive process rather than random genetic mutation and sexual recombination, (b) no role was assigned to variation in traits within a population or differences in reproductive success, and (c) traits were seen as gradually changing in all members of a population. Although students had taken an average of 1.9 years of previous biology courses, performance on the pretest was uniformly low. There was no relationship between the amount of previous biology taken and either pretest or posttest performance. Belief in the truthfulness of evolutionary theory was also unrelated to either pretest or posttest performance. Course instruction using specially designed materials was moderately successful in improving students' understanding of the evolutionary process.
Mundy, Matthew E
2014-01-01
Explanations for the cognitive basis of the Müller-Lyer illusion are still frustratingly mixed. To date, Day's (1989) theory of perceptual compromise has received little empirical attention. In this study, we examine the merit of Day's hypothesis for the Müller-Lyer illusion by biasing participants toward global or local visual processing through exposure to Navon (1977) stimuli, which are known to alter processing-level preference for a short time. Participants (N = 306) were randomly allocated to global, local, or control conditions. Those in global or local conditions were exposed to Navon stimuli for 5 min and were required to report on the global or local stimulus features, respectively. Subsequently, participants completed a computerized Müller-Lyer experiment in which they adjusted the length of a line to match an illusory figure. The illusion was significantly stronger for participants with a global bias, and significantly weaker for those with a local bias, compared with the control condition. These findings provide empirical support for Day's "conflicting cues" theory of perceptual compromise in the Müller-Lyer illusion.
Asymmetrically dominated choice problems, the isolation hypothesis and random incentive mechanisms.
Cox, James C; Sadiraj, Vjollca; Schmidt, Ulrich
2014-01-01
This paper presents an experimental study of the random incentive mechanisms that are a standard procedure in economic and psychological experiments. Random incentive mechanisms have several advantages but are incentive-compatible only if responses to the single tasks are independent. This is true if either the independence axiom of expected utility theory or the isolation hypothesis of prospect theory holds. We present a simple test of this in the context of choice under risk. In the baseline (one-task) treatment we observe risk behavior in a given choice problem. We show that by integrating a second, asymmetrically dominated choice problem into a random incentive mechanism, risk behavior can be manipulated systematically. This implies that the isolation hypothesis is violated and the random incentive mechanism does not elicit true preferences in our example.
Simulating propagation of coherent light in random media using the Fredholm type integral equation
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2017-06-01
Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if such simulations must account for the coherence properties of light, they can become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g., Radiative Transfer Theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variations of these methods allow prediction of the behavior of coherent light, but only for an averaged realization of the scattering medium. This limits their application in studying many physical phenomena connected to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm-type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including comparisons with well-known analytical results and with finite-difference-type simulations. We also present an extension of the method to multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
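The discretize-and-solve route mentioned above can be made concrete in a few lines. Below is a minimal Nyström-type sketch for a scalar Fredholm equation of the second kind on [0, 1]; the kernel, the source term, and the coupling constant are illustrative stand-ins, not the electromagnetic scattering kernel of the paper.

```python
import numpy as np

# Fredholm equation of the second kind:
#   u(x) = f(x) + lam * \int_0^1 K(x, y) u(y) dy.
# Discretizing the integral with quadrature weights w_j turns the equation
# into the linear system (I - lam * K * W) u = f.
N = 200
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights

lam = 0.5
K = np.exp(-np.abs(x[:, None] - x[None, :]))   # illustrative smooth kernel
f = np.sin(np.pi * x)                          # illustrative source term

A = np.eye(N) - lam * K * w[None, :]
u = np.linalg.solve(A, f)

# Residual check: u should satisfy the discretized integral equation.
print(np.max(np.abs(u - (f + lam * (K * w) @ u))))   # ~ machine precision
```

For large grids, the dense solve above would be replaced by the iterative or Monte Carlo solvers the abstract mentions; the discretization step is the same.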
Spectrum of the Wilson Dirac operator at finite lattice spacings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akemann, G.; Damgaard, P. H.; Splittorff, K.
2011-04-15
We consider the effect of discretization errors on the microscopic spectrum of the Wilson Dirac operator using both chiral perturbation theory and chiral random matrix theory. A graded chiral Lagrangian is used to evaluate the microscopic spectral density of the Hermitian Wilson Dirac operator as well as the distribution of the chirality over the real eigenvalues of the Wilson Dirac operator. It is shown that a chiral random matrix theory for the Wilson Dirac operator reproduces the leading zero-momentum terms of Wilson chiral perturbation theory. All results are obtained for a fixed index of the Wilson Dirac operator. The low-energy constants of Wilson chiral perturbation theory are shown to be constrained by the Hermiticity properties of the Wilson Dirac operator.
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of differing complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols drawn from a fixed alphabet, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series; streamflow was less random and more complex than precipitation. This reflects the fact that the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased with model complexity, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based metrics of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
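The symbolization step and the first of the three metrics can be illustrated compactly. The sketch below is a simplified reading of the procedure, assuming quantile binning into a 4-symbol alphabet and taking mean information gain as the conditional entropy of the next symbol given the preceding word; the toy "precipitation-like" and "streamflow-like" series are invented stand-ins for real data.

```python
import numpy as np
from collections import Counter

def symbolize(series, n_symbols=4):
    """Map a time series to quantile-based symbols: symbol k means the
    value falls in the k-th quantile bin of the empirical distribution."""
    edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, edges)

def entropy(blocks):
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mean_information_gain(sym, L=1):
    """Simplified mean information gain: conditional entropy of the next
    symbol given the preceding L-symbol word, H(L+1) - H(L)."""
    w_L  = [tuple(sym[i:i + L])     for i in range(len(sym) - L)]
    w_L1 = [tuple(sym[i:i + L + 1]) for i in range(len(sym) - L)]
    return entropy(w_L1) - entropy(w_L)

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)                        # "precipitation-like": random
smooth = np.convolve(noise, np.ones(30) / 30, "same")  # "streamflow-like": filtered
for name, s in [("random input", noise), ("filtered output", smooth)]:
    print(name, round(mean_information_gain(symbolize(s)), 3))
```

The filtered series yields the lower mean information gain, mirroring the paper's observation that watersheds act as information filters.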
An analysis of the metabolic theory of the origin of the genetic code
NASA Technical Reports Server (NTRS)
Amirnovin, R.; Bada, J. L. (Principal Investigator)
1997-01-01
A computer program was used to test Wong's coevolution theory of the genetic code. Codon correlations between biosynthetically related amino acids in the universal genetic code and in randomly generated genetic codes were compared. It was determined that many codon correlations are also present within random genetic codes and that among the random codes there are always several which have many more correlations than are found in the universal code. Although the number of correlations depends on the choice of biosynthetically related amino acids, the probability of choosing a random genetic code with the same or greater number of codon correlations as the universal genetic code was found to vary from 0.1% to 34% (with respect to a fairly complete listing of related amino acids). Thus, Wong's theory that the genetic code arose by coevolution with the biosynthetic pathways of amino acids, based on codon correlations between biosynthetically related amino acids, is statistical in nature.
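A toy version of this statistical test is sketched below. The code-table fragment and the related-pairs list are small illustrative assumptions (the study used the full 64-codon table and a fairly complete listing of related amino acids); a "codon correlation" is counted here whenever codons of two related amino acids differ in a single base.

```python
import random
from itertools import product

random.seed(0)

# Illustrative fragment of the standard code (codon -> amino acid) and of
# a related-amino-acid list; both are stand-ins, not the full assignments.
code = {"GAA": "Glu", "GAG": "Glu", "CAA": "Gln", "CAG": "Gln",
        "GAU": "Asp", "GAC": "Asp", "AAU": "Asn", "AAC": "Asn",
        "GCU": "Ala", "GCC": "Ala", "UCU": "Ser", "UCC": "Ser"}
related = [("Glu", "Gln"), ("Asp", "Asn"), ("Glu", "Asp"), ("Ala", "Ser")]

def one_base_apart(c1, c2):
    return sum(a != b for a, b in zip(c1, c2)) == 1

def correlations(code):
    n = 0
    for aa1, aa2 in related:
        c1 = [c for c, a in code.items() if a == aa1]
        c2 = [c for c, a in code.items() if a == aa2]
        n += sum(one_base_apart(x, y) for x, y in product(c1, c2))
    return n

def random_code(code):
    # Shuffle which amino acid owns each synonymous codon block.
    aas = sorted(set(code.values()))
    relabel = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: relabel[a] for c, a in code.items()}

obs = correlations(code)
rand = [correlations(random_code(code)) for _ in range(10000)]
print(f"universal fragment: {obs} correlations; "
      f"P(random >= universal) = {sum(r >= obs for r in rand) / len(rand):.3f}")
```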
Chang, Zhiwei; Halle, Bertil
2013-10-14
In complex biological or colloidal samples, magnetic relaxation dispersion (MRD) experiments using the field-cycling technique can characterize molecular motions on time scales ranging from nanoseconds to microseconds, provided that a rigorous theory of nuclear spin relaxation is available. In gels, cross-linked proteins, and biological tissues, where an immobilized macromolecular component coexists with a mobile solvent phase, nuclear spins residing in solvent (or cosolvent) species relax predominantly via exchange-mediated orientational randomization (EMOR) of anisotropic nuclear (electric quadrupole or magnetic dipole) couplings. The physical or chemical exchange processes that dominate the MRD typically occur on a time scale of microseconds or longer, where the conventional perturbation theory of spin relaxation breaks down. There is thus a need for a more general relaxation theory. Such a theory, based on the stochastic Liouville equation (SLE) for the EMOR mechanism, is available for a single quadrupolar spin I = 1. Here, we present the corresponding theory for a dipole-coupled spin-1/2 pair. To our knowledge, this is the first treatment of dipolar MRD outside the motional-narrowing regime. Based on an analytical solution of the spatial part of the SLE, we show how the integral longitudinal relaxation rate can be computed efficiently. Both like and unlike spins, with selective or non-selective excitation, are treated. For the experimentally important dilute regime, where only a small fraction of the spin pairs are immobilized, we obtain simple analytical expressions for the auto-relaxation and cross-relaxation rates which generalize the well-known Solomon equations. These generalized results will be useful in biophysical studies, e.g., of intermittent protein dynamics. In addition, they represent a first step towards a rigorous theory of water ¹H relaxation in biological tissues, which is a prerequisite for unravelling the molecular basis of soft-tissue contrast in clinical magnetic resonance imaging.
GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES
This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and, from these data, the counting distributions.
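The "remarkably simple Monte Carlo realization" can be sketched as a branching process: each neutron lives an exponential time and then either induces a fission with random multiplicity or leaks. All parameters below (fission probability, lifetime, multiplicity distribution) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Subcritical toy system: p_fis * mean(nu) = 0.4 * 2.2 = 0.88 < 1, so
# every chain terminates with probability one.
p_fis, tau = 0.4, 1.0e-8                 # fission probability, lifetime [s]
nu_vals, nu_probs = [0, 1, 2, 3, 4], [0.05, 0.2, 0.35, 0.3, 0.1]

def chain(t_end=5e-7):
    """(fissions, leaked neutrons) accumulated by t_end for a chain
    started by one neutron at t = 0."""
    times = [0.0]                        # birth times of live neutrons
    fissions = leaked = 0
    while times:
        t = times.pop() + rng.exponential(tau)   # end of this neutron's life
        if t > t_end:
            continue
        if rng.random() < p_fis:
            fissions += 1
            nu = rng.choice(nu_vals, p=nu_probs)
            times.extend([t] * nu)       # nu daughter neutrons born at t
        else:
            leaked += 1
    return fissions, leaked

data = np.array([chain() for _ in range(20000)])
print("mean fissions:", data[:, 0].mean(), " mean leaked:", data[:, 1].mean())
print("second factorial moment of leaked neutrons:",
      (data[:, 1] * (data[:, 1] - 1)).mean())
```

Histogramming such chain outputs over random initiation times is what, in the paper's scheme, generates the time-tagged counting distributions.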
A random walk model for evaluating clinical trials involving serial observations.
Hopper, J L; Young, G P
1988-05-01
For clinical trials where the variable of interest is ordered and categorical (for example, disease severity, symptom scale), and where measurements are taken at intervals, it might be possible to achieve a greater discrimination between the efficacy of treatments by modelling each patient's progress as a stochastic process. The random walk is a simple, easily interpreted model that can be fitted by maximum likelihood using a maximization routine, with inference based on standard likelihood theory. In general the model allows for randomly censored data and incorporates measured prognostic factors, and inference is conditional on the (possibly non-random) allocation of patients. Tests of fit and of model assumptions are proposed, and applications to two therapeutic trials of gastroenterological disorders are presented. The model gave measures of the rate of, and variability in, improvement for patients under different treatments. A small simulation study suggested that the model is more powerful than considering the difference between initial and final scores, even when applied to data generated by a mechanism other than the random walk model assumed in the analysis. It thus provides a useful additional statistical method for evaluating clinical trials.
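A hedged sketch of the general idea, not the authors' exact likelihood: the ordered categorical score steps +1, -1, or 0 between visits, with step probabilities (reflected at the ends of the scale) estimated per treatment arm by maximum likelihood, exactly as the abstract describes for a maximization routine.

```python
import numpy as np
from scipy.optimize import minimize

K = 5  # ordered scores 0..5

def step_probs(theta):
    # Softmax parameterization of (P(+1), P(-1), P(0)).
    e = np.exp([theta[0], theta[1], 0.0])
    return e / e.sum()

def neg_log_lik(theta, pairs):
    p_up, p_dn, p_st = step_probs(theta)
    ll = 0.0
    for s, s_next in pairs:
        # Moves past a boundary fold back onto the boundary score.
        probs = {min(s + 1, K): 0.0, max(s - 1, 0): 0.0, s: 0.0}
        probs[min(s + 1, K)] += p_up
        probs[max(s - 1, 0)] += p_dn
        probs[s] += p_st
        ll += np.log(probs.get(s_next, 1e-12))
    return -ll

# Toy data for one arm: patients tend to improve (illustrative only).
rng = np.random.default_rng(3)
true = np.array([0.5, 0.2, 0.3])          # up, down, stay
pairs = []
for _ in range(100):                      # 100 patients, 6 visits each
    s = 3
    for _ in range(6):
        move = rng.choice([1, -1, 0], p=true)
        s_next = int(np.clip(s + move, 0, K))
        pairs.append((s, s_next))
        s = s_next

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(pairs,), method="Nelder-Mead")
print("fitted (up, down, stay):", np.round(step_probs(fit.x), 3))
```

Comparing fitted step probabilities between arms then plays the role of the treatment contrast.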
Theory of Mind Training in Children with Autism: A Randomized Controlled Trial
ERIC Educational Resources Information Center
Begeer, Sander; Gevers, Carolien; Clifford, Pamela; Verhoeve, Manja; Kat, Kirstin; Hoddenbach, Elske; Boer, Frits
2011-01-01
Many children with Autism Spectrum Disorders (ASD) participate in social skills or Theory of Mind (ToM) treatments. However, few studies have shown evidence for their effectiveness. The current study used a randomized controlled design to test the effectiveness of a 16-week ToM treatment in 8-13 year old children with ASD and normal IQs (n = 40).…
Costello, Fintan; Watts, Paul
2016-01-01
A standard assumption in much of current psychology is that people do not reason about probability using the rules of probability theory but instead use various heuristics or "rules of thumb," which can produce systematic reasoning biases. In Costello and Watts (2014), we showed that a number of these biases can be explained by a model where people reason according to probability theory but are subject to random noise. More importantly, that model also predicted agreement with probability theory for certain expressions that cancel the effects of random noise: Experimental results strongly confirmed this prediction, showing that probabilistic reasoning is simultaneously systematically biased and "surprisingly rational." In their commentaries on that paper, both Crupi and Tentori (2016) and Nilsson, Juslin, and Winman (2016) point to various experimental results that, they suggest, our model cannot explain. In this reply, we show that our probability theory plus noise model can in fact explain every one of the results identified by these authors. This gives a degree of additional support to the view that people's probability judgments embody the rational rules of probability theory and that biases in those judgments can be explained as simply effects of random noise.
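The cancellation argument is easy to demonstrate numerically. The sketch below assumes the sampling-with-read-error form of the model: probabilities are estimated from N event indicators sampled from memory, each read incorrectly (flipped) with probability d, so individual estimates are biased toward 0.5 while the addition-law combination P(A) + P(B) - P(A and B) - P(A or B) has expectation zero. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

d, N, trials = 0.15, 50, 20000
pA, pB = 0.3, 0.4  # independent events, for simplicity

def noisy_estimate(x):
    # Each stored indicator is read incorrectly with probability d.
    flips = rng.random(x.size) < d
    return np.where(flips, ~x, x).mean()

sums, est_A = np.zeros(trials), np.zeros(trials)
for t in range(trials):
    A = rng.random(N) < pA
    B = rng.random(N) < pB
    est_A[t] = noisy_estimate(A)
    sums[t] = (noisy_estimate(A) + noisy_estimate(B)
               - noisy_estimate(A & B) - noisy_estimate(A | B))

print(f"mean estimate of P(A): {est_A.mean():.3f}  (true {pA}; biased toward 0.5)")
print(f"mean cancelling sum:   {sums.mean():+.4f}  (theory: 0)")
```

The bias in the single estimate, E = (1 - 2d)p + d, coexists with an unbiased cancelling combination, which is the paper's "biased yet surprisingly rational" point.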
Hoddenbach, Elske; Koot, Hans M; Clifford, Pamela; Gevers, Carolien; Clauser, Cassandra; Boer, Frits; Begeer, Sander
2012-11-09
Having a 'theory of mind', or having the ability to attribute mental states to oneself or others, is considered one of the most central domains of impairment among children with an autism spectrum disorder (ASD). Many interventions focus on improving theory of mind skills in children with ASD. Nonetheless, the empirical evidence for the effect of these interventions is limited. The main goal of this study is to examine the effectiveness of a short theory of mind intervention for children with ASD. A second objective is to determine which subgroups within the autism spectrum profit most from the intervention. This study is a randomized controlled trial. One hundred children with ASD, aged 7 to 12 years will be randomly assigned to an intervention or a waiting list control group. Outcome measures include the completion of theory of mind and emotion understanding tasks, and parent and teacher questionnaires on children's social skills. Follow-up data for the intervention group will be collected 6 months after the interventions. This study evaluates the efficacy of a theory of mind intervention for children with ASD. Hypotheses, strengths, and limitations of the study are discussed. Netherlands Trial Register NTR2327.
Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available.
Hernán, Miguel A; Robins, James M
2016-04-15
Ideally, questions about comparative effectiveness or safety would be answered using an appropriately designed and conducted randomized experiment. When we cannot conduct a randomized experiment, we analyze observational data. Causal inference from large observational databases (big data) can be viewed as an attempt to emulate a randomized experiment-the target experiment or target trial-that would answer the question of interest. When the goal is to guide decisions among several strategies, causal analyses of observational data need to be evaluated with respect to how well they emulate a particular target trial. We outline a framework for comparative effectiveness research using big data that makes the target trial explicit. This framework channels counterfactual theory for comparing the effects of sustained treatment strategies, organizes analytic approaches, provides a structured process for the criticism of observational studies, and helps avoid common methodologic pitfalls.
Statistical mechanics of complex economies
NASA Astrophysics Data System (ADS)
Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo
2017-04-01
In the pursuit of ever increasing efficiency and growth, our economies have evolved to remarkable degrees of complexity, with nested production processes feeding each other in order to create products of greater sophistication from less sophisticated ones, down to raw materials. The engine of such an expansion has been competitive markets that, according to general equilibrium theory (GET), achieve efficient allocations under specific conditions. We study large random economies within the GET framework, as templates of complex economies, and we find that a non-trivial phase transition occurs: the economy freezes in a state where all production processes collapse when either the number of primary goods or the number of available technologies falls below a critical threshold. As in other examples of phase transitions in large random systems, this is an unintended consequence of the growth in complexity. Our findings suggest that the Industrial Revolution can be regarded as a sharp transition between different phases, but also imply that well developed economies can collapse if too many intermediate goods are introduced.
Ganging up or sticking together? Group processes and children's responses to text-message bullying.
Jones, Siân E; Manstead, Antony S R; Livingstone, Andrew G
2011-02-01
Drawing on social identity theory and intergroup emotion theory (IET), we examined group processes underlying bullying behaviour. Children were randomly assigned to one of three groups: a perpetrator's group, a target's group, or a third party group. They then read a gender-consistent scenario in which the norm of the perpetrator's group (to be kind or unkind towards others) was manipulated, and an instance of cyberbullying between the perpetrator's group and a member of the target's group was described. It was found that group membership, group norms, and the proposed antecedents of the group-based emotions of pride, shame, and anger (but not guilt) influenced group-based emotions and action tendencies in ways predicted by social identity and IET. The results underline the importance of understanding group-level emotional reactions when it comes to tackling bullying, and show that being part of a group can be helpful in overcoming the negative effects of bullying.
Gaston, Anca; Prapavessis, Harry
2014-04-01
Despite the benefits of exercise during pregnancy, many expectant mothers are inactive. This study examined whether augmenting a protection motivation theory (PMT) intervention with a Health Action Process Approach can enhance exercise behavior change among pregnant women. Sixty inactive pregnant women were randomly assigned to one of three treatment groups: PMT-only, PMT + action-planning, and PMT + action-and-coping-planning. Week-long objective (accelerometer) and subjective (self-report) exercise measures were collected at baseline, and at 1- and 4-weeks post-intervention. Repeated-measures ANOVAs demonstrated that while all participants reported increased exercise from baseline to 1-week post-intervention, participants in both planning groups were significantly more active (p < .001) than those in the PMT-only group by 4-weeks post-intervention (η² = .13 and .15 for accelerometer and self-report data, respectively). In conclusion, augmenting a PMT intervention with action or action-and-coping-planning can enhance exercise behavior change in pregnant women.
Zhao, Fang-Fang; Suhonen, Riitta; Koskinen, Sanna; Leino-Kilpi, Helena
2017-04-01
To synthesize the effects of theory-based self-management educational interventions on patients with type 2 diabetes (T2DM) in randomized controlled trials. Type 2 diabetes is a common chronic disease causing complications that put a heavy burden on society and reduce the quality of life of patients. Good self-management of diabetes can prevent complications and improve the quality of life of T2DM patients. Systematic review with meta-analysis of randomized controlled trials following Cochrane methods. A literature search was carried out in the MEDLINE, EMBASE, CINAHL, PSYCINFO, and Web of Science databases (1980-April 2015). The risk of bias of the eligible studies was assessed independently by two authors using the Cochrane Collaboration's tool. Publication bias of the main outcomes was examined. Statistical heterogeneity was assessed and a random-effects model was used for meta-analysis. Twenty studies with 5802 participants met the inclusion criteria. The interventions in the studies were based on one or more theories, mostly mid-range theories. The pooled main outcomes under the random-effects model showed significant improvements in HbA1c, self-efficacy, and diabetes knowledge, but not in BMI. As for quality of life, no conclusions can be drawn, as the pooled outcome reversed direction with reduced heterogeneity after one study was excluded. No significant publication bias was found in the main outcomes. For theory-based interventions to produce greater effects, patients should be more actively involved and the education team should be trained beyond the primary preparation for the self-management education program.
Khrennikov, Andrei
2011-09-01
We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain" that reduce mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by the joint activity of neurons. This novel approach to quantum information is based on the representation of quantum mechanics as a version of classical signal theory, which was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to the other. A QL concept, given in our model by a density operator, can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has a covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays a crucial role in the creation of the QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains of the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations.
A Self-Critique of Self-Organized Criticality in Astrophysics
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.
2015-08-01
The concept of "self-organized criticality" (SOC) was originally proposed as an explanation of 1/f noise by Bak, Tang, and Wiesenfeld (1987), but turned out to have a far broader significance for scale-free nonlinear energy dissipation processes occurring in the entire universe. Over the last 30 years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been performed by numerical cellular automaton simulations, by analytical calculations of statistical (powerlaw-like) distributions based on physical scaling laws, and by observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into numerical SOC toy models. The novel applications also stimulated vigorous debates about the discrimination between SOC-related and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC models applied to astrophysical observations, attempt to describe what physics can be captured by SOC models, and offer a critique of weaknesses and strengths in existing SOC models.
25 Years of Self-Organized Criticality: Solar and Astrophysics
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Crosby, Norma B.; Dimitropoulou, Michaila; Georgoulis, Manolis K.; Hergarten, Stefan; McAteer, James; Milovanov, Alexander V.; Mineshige, Shin; Morales, Laura; Nishizuka, Naoto; Pruessner, Gunnar; Sanchez, Raul; Sharma, A. Surja; Strugarek, Antoine; Uritsky, Vadim
2016-01-01
Shortly after the seminal paper "Self-Organized Criticality: An explanation of 1/f noise" by Bak et al. (1987), the idea was applied to solar physics, in "Avalanches and the Distribution of Solar Flares" by Lu and Hamilton (1991). In the following years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been performed by numerical cellular automaton simulations, by analytical calculations of statistical (powerlaw-like) distributions based on physical scaling laws, and by observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into the numerical SOC toy models, such as the discretization of magneto-hydrodynamics (MHD) processes. The novel applications also stimulated vigorous debates about the discrimination between SOC models, SOC-like, and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC studies from the last 25 years and highlight new trends, open questions, and future challenges, as discussed during two recent ISSI workshops on this theme.
Random-walk approach to the d -dimensional disordered Lorentz gas
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2008-02-01
A correlated random walk approach to diffusion is applied to the disordered nonoverlapping Lorentz gas. By invoking the Lu-Torquato theory for chord-length distributions in random media [J. Chem. Phys. 98, 6472 (1993)], an analytic expression for the diffusion constant in an arbitrary number of dimensions d is obtained. The result corresponds to an Enskog-like correction to the Boltzmann prediction, being exact in the dilute limit, and better or nearly exact in comparison to renormalized kinetic theory predictions for all allowed densities in d=2,3. Extensive numerical simulations were also performed to elucidate the role of the approximations involved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamaluddin, M.B.
1986-01-01
The Boson Expansion Theory of Kishimoto and Tamura has proved to be very successful in describing quadrupole collective motions in even-even nuclei. This theory, however, involves a complicated transformation from the Tamm-Dancoff phonons to the phonons of the Random Phase Approximation. In this thesis a Boson Expansion formalism, derived directly from the Random Phase Approximation and set forth by Pedrocchi and Tamura, is used to derive the boson forms of the nuclear Hamiltonian and the electromagnetic transition operator. Detailed discussions of the formalism of Pedrocchi and Tamura and its extension needed to perform realistic calculations are presented. The technique used to derive the boson forms and the formulae used in the calculations are also given a thorough treatment to demonstrate the simplicity of this approach. Finally, the theory is tested by applying it to calculate the energy levels and some electromagnetic properties of the samarium isotopes. The results show that the present theory is capable of describing the range of behavior from a vibrational to a rotational character of the samarium isotopes as well as the previous theory.
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing.
Program summary:
Program title: TRQS
Catalogue identifier: AEKA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7924
No. of bytes in distributed program, including test data, etc.: 88 651
Distribution format: tar.gz
Programming language: Mathematica, C
Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a system supporting a recent version of Mathematica
Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit)
RAM: Case dependent
Classification: 4.15
Nature of problem: Generation of random density matrices.
Solution method: Use of a physical quantum random number generator.
Running time: Generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
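For readers without the Quantis hardware, the kind of object the package produces can be illustrated with an ordinary pseudo-random generator. The sketch below draws a random density matrix from the Hilbert-Schmidt ensemble via a complex Ginibre matrix; it is a numpy analogue under that assumption, not the package's actual Mathematica implementation, whose randomness comes from the hardware QRNG.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_density_matrix(dim):
    # Complex Ginibre matrix G; rho = G G† / tr(G G†) is a valid density
    # matrix (Hermitian, positive semidefinite, unit trace).
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

rho = random_density_matrix(4)
print("trace:", np.trace(rho).real)                      # 1 by construction
print("min eigenvalue:", np.linalg.eigvalsh(rho).min())  # >= 0
```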
Experimentally generated randomness certified by the impossibility of superluminal signals.
Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K
2018-04-01
From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10⁻¹². These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.
ERIC Educational Resources Information Center
de Veld, Danielle M. J.; Howlin, Patricia; Hoddenbach, Elske; Mulder, Fleur; Wolf, Imke; Koot, Hans M.; Lindauer, Ramón; Begeer, Sander
2017-01-01
This RCT investigated whether the effect of a Theory of Mind (ToM) intervention for children with ASD was moderated by parental education level and employment, family structure, and parental ASD. Children with autism aged 8-13 years (n = 136) were randomized over a waitlist control or treatment condition. At posttest, children in the treatment…
NASA Astrophysics Data System (ADS)
Rogotis, Savvas; Ioannidis, Dimosthenis; Tzovaras, Dimitrios; Likothanassis, Spiros
2015-04-01
The aim of this work is to present a novel approach for automatic recognition of suspicious activities in outdoor perimeter surveillance systems based on infrared video processing. Through the combination of size, speed, and appearance-based features, like the Center-Symmetric Local Binary Patterns, short-term actions are identified and serve as input, along with user location, for modeling target activities using the theory of Hidden Conditional Random Fields. HCRFs are used to directly link a set of observations to the most appropriate activity label and as such to discriminate high-risk activities (e.g., trespassing) from zero-risk activities (e.g., loitering outside the perimeter). Experimental results demonstrate the effectiveness of our approach in identifying suspicious activities for video surveillance systems.
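As an illustration of the appearance features mentioned above, here is a minimal Center-Symmetric Local Binary Pattern (CS-LBP) sketch for an 8-neighbourhood: opposite neighbours are compared with each other rather than with the centre pixel, giving a 4-bit code per pixel. The threshold value and the random stand-in frame are assumptions; a real pipeline would compute these codes on infrared frames and histogram them per region.

```python
import numpy as np

def cs_lbp(img, T=0.01):
    """4-bit CS-LBP code per pixel from the four centre-symmetric
    neighbour pairs of an 8-neighbourhood."""
    img = img.astype(float)
    # Each offset (dy, dx) is compared against its opposite (-dy, -dx).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1)]
    code = np.zeros(img.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        a = np.roll(img, (dy, dx), axis=(0, 1))
        b = np.roll(img, (-dy, -dx), axis=(0, 1))
        code |= ((a - b) > T).astype(np.uint8) << bit
    return code

frame = np.random.default_rng(6).random((120, 160))   # stand-in IR frame
hist = np.bincount(cs_lbp(frame).ravel(), minlength=16)
print(hist)   # 16-bin code histograms serve as appearance features
```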
Correlations of RMT characteristic polynomials and integrability: Hermitean matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osipov, Vladimir Al.; Kanzieper, Eugene
Integrable theory is formulated for correlation functions of characteristic polynomials associated with invariant non-Gaussian ensembles of Hermitean random matrices. By embedding the correlation functions of interest into a more general theory of τ functions, we (i) identify a zoo of hierarchical relations satisfied by τ functions in an abstract infinite-dimensional space and (ii) present a technology to translate these relations into hierarchically structured nonlinear differential equations describing the correlation functions of characteristic polynomials in the physical, spectral space. Implications of this formalism for fermionic, bosonic, and supersymmetric variations of zero-dimensional replica field theories are discussed at length. A particular emphasis is placed on the phenomenon of fermionic-bosonic factorisation of random-matrix-theory correlation functions.
Feynman-Kac formula for stochastic hybrid systems.
Bressloff, Paul C
2017-01-01
We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
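The occupation-time example can be checked with a direct Monte Carlo simulation of the velocity jump (telegraph) process: the velocity switches between +v and -v at rate k, and the functional of interest is the time spent on the positive half-line. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

v, k, T = 1.0, 2.0, 10.0   # speed, switching rate, horizon (assumed values)

def occupation_time():
    """Time spent with x(t) > 0 up to T, for x(0) = 0."""
    t, x, vel, occ = 0.0, 0.0, v, 0.0
    while t < T:
        dt = min(rng.exponential(1.0 / k), T - t)
        x_new = x + vel * dt
        # Motion is linear within a flight, so the sign of x changes at
        # most once per interval.
        if x >= 0 and x_new >= 0:
            occ += dt
        elif x < 0 and x_new < 0:
            pass
        else:
            t_cross = -x / vel               # time until x hits 0
            occ += dt - t_cross if x < 0 else t_cross
        x, t, vel = x_new, t + dt, -vel      # velocity flips at switch times
    return occ

samples = np.array([occupation_time() for _ in range(5000)])
print("mean fraction of time with x > 0:", samples.mean() / T)  # ~ 1/2 by symmetry
```

The empirical distribution of such samples is what the Feynman-Kac moment-generating-function machinery characterizes analytically.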
Free Vibration of Uncertain Unsymmetrically Laminated Beams
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Goyal, Vijay K.
2001-01-01
Monte Carlo Simulation and Stochastic FEA are used to predict randomness in the free vibration response of thin unsymmetrically laminated beams. For the present study, it is assumed that randomness in the response is caused only by uncertainties in the ply orientations. The ply orientations may become random or uncertain during the manufacturing process. A new 16-dof beam element, based on the first-order shear deformation beam theory, is used to study the stochastic nature of the natural frequencies. Using variational principles, the element stiffness matrix and mass matrix are obtained through analytical integration. Using a random sequence, a large data set containing possible random ply orientations is generated; these data are assumed to be symmetric. The stochastic finite element model for free vibrations predicts the relation between the randomness in fundamental natural frequencies and the randomness in ply orientation. The sensitivity derivatives are calculated numerically through an exact formulation. The squared fundamental natural frequencies are expressed in terms of deterministic and probabilistic quantities, allowing one to determine how sensitive they are to variations in ply angles. The predicted mean-valued fundamental natural frequency squared and the variance of the present model are in good agreement with Monte Carlo Simulation. Results also show that variations of plus or minus 5 degrees in ply angles can affect the free vibration response of unsymmetrically and symmetrically laminated beams.
Recent Advances in Voltammetry
Batchelor-McAuley, Christopher; Kätelhön, Enno; Barnes, Edward O; Compton, Richard G; Laborda, Eduardo; Molina, Angela
2015-01-01
Recent progress in the theory and practice of voltammetry is surveyed and evaluated. The transformation over the last decade of the level of modelling and simulation of experiments has realised major advances such that electrochemical techniques can be fully developed and applied to real chemical problems of distinct complexity. This review focuses on the topic areas of: multistep electrochemical processes, voltammetry in ionic liquids, the development and interpretation of theories of electron transfer (Butler–Volmer and Marcus–Hush), advances in voltammetric pulse techniques, stochastic random walk models of diffusion, the influence of migration under conditions of low support, voltammetry at rough and porous electrodes, and nanoparticle electrochemistry. The review of the latter field encompasses both the study of nanoparticle-modified electrodes, including stripping voltammetry and the new technique of ‘nano-impacts’. PMID:26246984
From Foucault to Freire Through Facebook: Toward an Integrated Theory of mHealth.
Bull, Sheana; Ezeanochie, Nnamdi
2016-08-01
To document the integration of social science theory in literature on mHealth (mobile health) and consider opportunities for integration of classic theory, health communication theory, and social networking to generate a relevant theory for mHealth program design. A secondary review of research syntheses and meta-analyses published between 2005 and 2014 related to mHealth, using the AMSTAR (A Measurement Tool to Assess Systematic Reviews) methodology for assessment of the quality of each review. High-quality articles from those reviews using a randomized controlled design and integrating social science theory in program design, implementation, or evaluation were reviewed. Results: There were 1,749 articles among the 170 reviews with a high AMSTAR score (≥30). Only 13 were published from 2005 to 2014, used a randomized controlled design, and made explicit mention of theory in any aspect of their mHealth program. All 13 included theoretical perspectives focused on psychological and/or psychosocial theories and constructs. Conclusions: There is very limited use of social science theory in mHealth despite demonstrated benefits of doing so. We propose an integrated theory of mHealth that incorporates classic theory, health communication theory, and social networking to guide development and evaluation of mHealth programs.
Keller, Lisa A; Clauser, Brian E; Swanson, David B
2010-12-01
In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.
2012-01-01
Background: Several methodological issues with non-randomized comparative clinical studies have been raised, one of which is whether the methods used can adequately identify uncertainties that evolve dynamically with time in real-world systems. The objective of this study is to compare the effectiveness of different combinations of Traditional Chinese Medicine (TCM) treatments and combinations of TCM and Western medicine interventions in patients with acute ischemic stroke (AIS) by using Markov decision process (MDP) theory. MDP theory appears to be a promising new method for use in comparative effectiveness research. Methods: The electronic health records (EHR) of patients with AIS hospitalized at the 2nd Affiliated Hospital of Guangzhou University of Chinese Medicine between May 2005 and July 2008 were collected. Each record was partitioned into two "state-action-reward" stages divided by three time points: the first, third, and last day of hospital stay. We used the well-developed optimality technique in MDP theory with the finite horizon criterion to make a dynamic comparison of different treatment combinations. Results: A total of 1504 records with a primary diagnosis of AIS were identified. Only states with information from at least 10 patients were included, leaving 960 records for the MDP model. Optimal combinations were obtained for 30 types of patient condition. Conclusion: MDP theory makes it possible to dynamically compare the effectiveness of different combinations of treatments. However, the optimal interventions obtained by MDP theory here require further validation in clinical practice. Further exploratory studies with MDP theory in other areas in which complex interventions are common would be worthwhile. PMID:22400712
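The finite-horizon optimality technique referred to above is standard backward induction. The sketch below uses invented placeholder states, transition probabilities, and rewards, not quantities estimated from the EHR data; it shows only the mechanics of computing an optimal treatment policy over a small number of stages.

```python
import numpy as np

n_states, n_actions, horizon = 3, 2, 2
rng = np.random.default_rng(8)

# Placeholder model: P[a, s, s'] = transition probability under action a,
# R[a, s] = expected immediate reward (e.g., a clinical improvement score).
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_actions, n_states))

V = np.zeros(n_states)                       # terminal value
policy = np.zeros((horizon, n_states), dtype=int)
for stage in reversed(range(horizon)):
    Q = R + P @ V                            # Q[a, s] = reward + expected future value
    policy[stage] = Q.argmax(axis=0)         # best action per state at this stage
    V = Q.max(axis=0)

print("optimal value per initial state:", np.round(V, 3))
print("optimal action per stage/state:\n", policy)
```

In the study's setting, each state would encode a patient condition at one of the three time points and each action a treatment combination.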
NASA Astrophysics Data System (ADS)
Ingo, Carson; Sui, Yi; Chen, Yufen; Parrish, Todd; Webb, Andrew; Ronen, Itamar
2015-03-01
In this paper, we provide a context for the modeling approaches that have been developed to describe non-Gaussian diffusion behavior, which is ubiquitous in diffusion weighted magnetic resonance imaging of water in biological tissue. Subsequently, we focus on the formalism of the continuous time random walk theory to extract properties of subdiffusion and superdiffusion through novel simplifications of the Mittag-Leffler function. For the case of time-fractional subdiffusion, we compute the kurtosis for the Mittag-Leffler function, which provides both a connection and physical context to the much-used approach of diffusional kurtosis imaging. We provide Monte Carlo simulations to illustrate the concepts of anomalous diffusion as stochastic processes of the random walk. Finally, we demonstrate the clinical utility of the Mittag-Leffler function as a model to describe tissue microstructure through estimations of subdiffusion and kurtosis with diffusion MRI measurements in the brain of a chronic ischemic stroke patient.
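The subdiffusive limit of the continuous time random walk is easy to reproduce by direct simulation: with waiting times of tail exponent alpha in (0, 1), the mean-squared displacement grows as t^alpha rather than linearly. The sketch below uses a Pareto waiting-time density and illustrative sizes; it is a toy check of the scaling, not the paper's Mittag-Leffler analysis.

```python
import numpy as np

rng = np.random.default_rng(9)

alpha, n_walkers, n_steps = 0.7, 2000, 1200
t_obs = np.array([10.0, 100.0, 1000.0])

# Heavy-tailed waiting times ~ t^(-1-alpha) (Pareto with x_min = 1), so
# every wait is >= 1 and each walker's total time safely exceeds t_obs.
waits = rng.pareto(alpha, size=(n_walkers, n_steps)) + 1.0
arrival = np.cumsum(waits, axis=1)                    # jump times
pos = np.cumsum(rng.normal(size=(n_walkers, n_steps)), axis=1)

msd = []
for t in t_obs:
    n_jumps = (arrival <= t).sum(axis=1)              # jumps completed by t
    x = np.where(n_jumps > 0,
                 pos[np.arange(n_walkers), np.maximum(n_jumps - 1, 0)], 0.0)
    msd.append((x ** 2).mean())

slope = np.polyfit(np.log(t_obs), np.log(msd), 1)[0]
print("MSD growth exponent ~ %.2f (subdiffusive; alpha = %.1f)" % (slope, alpha))
```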
A new logistic dynamic particle swarm optimization algorithm based on random topology.
Ni, Qingjian; Deng, Jianming
2013-01-01
Population topology of particle swarm optimization (PSO) directly affects the dissemination of optimal information during the evolutionary process and has a significant impact on the performance of PSO. Classic static population topologies are usually used in PSO, such as the fully connected topology, ring topology, star topology, and square topology. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and the performance of PSO is also explored from the perspective of graph-theoretic characteristics of population topologies. Further, in a relatively new PSO variant named logistic dynamic particle swarm optimization, an extensive simulation study is presented to discuss the effectiveness of the random topology and the design strategies of population topology. Finally, the experimental data are analyzed and discussed, and some useful conclusions about the design and use of population topology in PSO are proposed, which can provide a basis for further discussion and research.
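One simple way to realize such a random topology, in the spirit of Clerc-style random informants, is sketched below; the number of informers K and the resampling policy are assumptions, not the paper's exact construction. Each particle informs itself and K randomly chosen particles, and the velocity update would use the best personal best among a particle's informers.

```python
import numpy as np

rng = np.random.default_rng(10)

def random_topology(n_particles, K=3):
    """Boolean matrix A with A[i, j] = True if particle i informs j."""
    A = np.eye(n_particles, dtype=bool)      # every particle informs itself
    for i in range(n_particles):
        A[i, rng.choice(n_particles, size=K, replace=False)] = True
    return A

def neighbourhood_best(A, pbest_cost):
    """Index, for each particle, of the best informer (lowest cost)."""
    informer_cost = np.where(A, pbest_cost[:, None], np.inf)
    return informer_cost.argmin(axis=0)

A = random_topology(20)
nb = neighbourhood_best(A, rng.random(20))
print("average in-degree:", A.sum(axis=0).mean())
print("best informer of the first five particles:", nb[:5])
```

Resampling A every iteration (or only after stagnation) gives the dynamic variants whose effect the simulation study examines.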
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
Remote sensing of Earth terrain
NASA Technical Reports Server (NTRS)
Kong, Jin Au; Yueh, Herng-Aung
1990-01-01
The layered random medium model is used to investigate the fully polarimetric scattering of electromagnetic waves from vegetation. The vegetation canopy is modeled as an anisotropic random medium containing nonspherical scatterers with preferred alignment. The underlying medium is considered to be a homogeneous half-space. The scattering effects of the vegetation canopy are characterized by 3-D correlation functions whose variances and correlation lengths correspond, respectively, to the fluctuation strengths and the physical geometries of the scatterers. The strong fluctuation theory is used to calculate the anisotropic effective permittivity tensor of the random medium, and the distorted Born approximation is then applied to obtain the covariance matrix which describes the fully polarimetric scattering properties of the vegetation field. This model accounts for all the interaction processes between the boundaries and the scatterers and includes all the coherent effects due to wave propagation in different directions, such as constructive and destructive interferences. For a vegetation canopy with low attenuation, the boundary between the vegetation and the underlying medium can give rise to significant coherent effects.
A pilot randomized, controlled trial of an active video game physical activity intervention.
Peng, Wei; Pfeiffer, Karin A; Winn, Brian; Lin, Jih-Hsuan; Suton, Darijan
2015-12-01
Active video games (AVGs) transform the sedentary screen time of video gaming into active screen time and have great potential to serve as a "gateway" tool to a more active lifestyle for the least active individuals. This pilot randomized trial was conducted to explore the potential of theory-guided active video games in increasing moderate-to-vigorous physical activity (MVPA) among young adults. In this pilot 4-week intervention, participants were randomly assigned to 1 of the following groups: an AVG group with all the self-determination theory (SDT)-based game features turned off, an AVG group with all the SDT-based game features turned on, a passive gameplay group with all the SDT-based game features turned on, and a control group. Physical activity was measured using ActiGraph GT3X accelerometers. Other outcomes included attendance and perceived need satisfaction of autonomy, competence, and relatedness. Playing the SDT-supported AVG resulted in greater MVPA compared with the control group immediately postintervention. The AVG with the theory-supported features also resulted in greater attendance and psychological need satisfaction than the non-theory-supported one. An AVG designed with motivation-theory-informed features positively impacted attendance and MVPA immediately postintervention, suggesting that including AVG features guided by motivation theory may be a method of addressing common problems with adherence and increasing the effectiveness of active gaming. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
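A rough sketch of the sampling scheme described above (the Gaussian falloff and all parameter values are assumptions for illustration, not the authors' exact protocol): each measurement mask picks a center pixel uniformly at random and then includes nearby pixels with probability decaying in their distance from the center.

    import numpy as np

    def localized_random_masks(height, width, n_masks, radius=3.0, seed=0):
        """Binary sensing masks: a random center plus nearby pixels included
        with Gaussian probability falloff (localized random sampling)."""
        rng = np.random.default_rng(seed)
        ys, xs = np.mgrid[0:height, 0:width]
        masks = np.zeros((n_masks, height * width))
        for m in range(n_masks):
            cy, cx = rng.integers(height), rng.integers(width)
            dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
            p = np.exp(-dist2 / (2.0 * radius ** 2))  # inclusion probability
            masks[m] = (rng.random((height, width)) < p).ravel().astype(float)
        return masks  # rows form the sensing matrix A in y = A @ image.ravel()

Setting radius large recovers approximately uniformly-random sampling, so the same code can be used to compare the two regimes discussed in the abstract.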
A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing
NASA Technical Reports Server (NTRS)
Takaki, Mitsuo; Cavalcanti, Diego; Gheyi, Rohit; Iyoda, Juliano; d'Amorim, Marcelo; Prudencio, Ricardo
2009-01-01
The complexity of constraints is a major obstacle for constraint-based software verification. Automatic constraint solvers are fundamentally incomplete: input constraints often build on some undecidable theory or some theory the solver does not support. This paper proposes and evaluates several randomized solvers to address this issue. We compare the effectiveness of a symbolic solver (CVC3), a random solver, three hybrid solvers (i.e., a mix of random and symbolic), and two heuristic search solvers. We evaluate the solvers on two benchmarks: one consisting of manually generated constraints and another generated with a concolic execution of 8 subjects. In addition to fully decidable constraints, the benchmarks include constraints with non-linear integer arithmetic, integer modulo and division, bitwise arithmetic, and floating-point arithmetic. As expected, symbolic solving (in particular, CVC3) subsumes the other solvers for the concolic execution of subjects that only generate decidable constraints. For the remaining subjects, the solvers are complementary.
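For orientation, the "random solver" strategy can be sketched in a few lines of Python (a generic illustration, not the paper's implementation): sample assignments uniformly from each variable's range and test the constraint directly, which sidesteps undecidable theories at the price of incompleteness.

    import random

    def random_solver(constraint, var_ranges, budget=100000, seed=0):
        """Sample random integer assignments; return the first satisfying one."""
        rng = random.Random(seed)
        for _ in range(budget):
            assignment = {v: rng.randint(lo, hi) for v, (lo, hi) in var_ranges.items()}
            if constraint(**assignment):
                return assignment
        return None  # inconclusive: unlike a symbolic solver, no proof of UNSAT

    # Handles constraints a symbolic solver may reject, e.g. non-linear arithmetic:
    sol = random_solver(lambda x, y: x * x + y * y == 25 and x % 3 == 1,
                        {"x": (-10, 10), "y": (-10, 10)})

Hybrid solvers in the spirit of the paper would fix randomly chosen values for the "hard" variables and hand the residual, now-decidable constraint to the symbolic solver.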
Stability and dynamical properties of material flow systems on random networks
NASA Astrophysics Data System (ADS)
Anand, K.; Galla, T.
2009-04-01
The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular, we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, small-world networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
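The qualitative stability picture can be probed in miniature with random matrix numerics. The toy sketch below (hypothetical parameter values; a May-style caricature rather than the paper's model) builds a Jacobian of self-regulating units coupled through an Erdős-Rényi adjacency with random input-output weights and checks the sign of the leading eigenvalue:

    import numpy as np

    def max_real_eigenvalue(n=200, p=0.05, sigma=0.5, seed=1):
        """Leading eigenvalue (real part) of J = -I + random couplings on ER(n, p);
        a negative value indicates linear stability of the flow network."""
        rng = np.random.default_rng(seed)
        adj = rng.random((n, n)) < p              # directed ER adjacency
        weights = rng.normal(0.0, sigma, (n, n))  # input-output variability
        J = -np.eye(n) + adj * weights
        return np.linalg.eigvals(J).real.max()

Sweeping sigma or p pushes the leading eigenvalue toward zero, reproducing the tendency reported above for element variability and random structure to destabilize the system.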
A scaling law for random walks on networks
Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick
2014-01-01
The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics. PMID:25311870
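The path distribution the paper characterizes can be estimated empirically. Below is a minimal Python sketch with a hypothetical three-node transition matrix; it samples complete walk paths up to absorption and tallies their probabilities, and ranking those probabilities on log-log axes is a quick way to eyeball the finite / stretched-exponential / power-law trichotomy (assuming the absorbing set is reachable, so walks terminate).

    import numpy as np
    from collections import Counter

    def sample_paths(P, start, absorbing, n_walks=20000, seed=2):
        """Empirical distribution over complete paths of a random walk on a
        finite network with row-stochastic transition matrix P."""
        rng = np.random.default_rng(seed)
        counts = Counter()
        for _ in range(n_walks):
            node, path = start, [start]
            while node not in absorbing:
                node = rng.choice(P.shape[0], p=P[node])
                path.append(node)
            counts[tuple(path)] += 1
        return {path: c / n_walks for path, c in counts.items()}

    P = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.0, 0.0, 1.0]])   # node 2 is absorbing
    dist = sample_paths(P, start=0, absorbing={2})

In this example infinitely many distinct paths are possible, yet their probability mass decays fast enough for the empirical tally to reveal the distribution's form.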
NASA Astrophysics Data System (ADS)
Müller, Tobias M.; Gurevich, Boris
2005-04-01
An important dissipation mechanism for waves in randomly inhomogeneous poroelastic media is the effect of wave-induced fluid flow. In the framework of Biot's theory of poroelasticity, this mechanism can be understood as scattering from fast into slow compressional waves. To describe this conversion scattering effect in poroelastic random media, the dynamic characteristics of the coherent wavefield are analyzed using the theory of statistical wave propagation. In particular, the method of statistical smoothing is applied to Biot's equations of poroelasticity. Within the accuracy of first-order statistical smoothing, an effective wave number of the coherent field, which accounts for the effect of wave-induced flow, is derived. This wave number is complex and involves an integral over the correlation function of the medium's fluctuations. It is shown that the known one-dimensional (1-D) result can be obtained as a special case of the present 3-D theory. The expression for the effective wave number allows a model for elastic attenuation and dispersion due to wave-induced fluid flow to be derived. These wavefield attributes are analyzed in a companion paper.
Hooker, Leesa; Small, Rhonda; Taft, Angela
2016-03-01
To investigate factors contributing to the sustained domestic violence screening and support practices of Maternal and Child Health nurses 2 years after a randomized controlled trial. Domestic violence screening by healthcare professionals has been implemented in many primary care settings. Barriers to screening exist and screening rates remain low. Evidence for longer term integration of nurse screening is minimal. Trial outcomes showed sustained safety planning behaviours by intervention group nurses. Process evaluation in 2-year follow-up of a cluster randomized controlled trial. Evaluation included a repeat online nurse survey and 14 interviews (July-September 2013). Survey analysis included comparison of proportionate group difference between arms and between trial baseline and 2-year follow-up surveys. Framework analysis was used to assess qualitative data. Normalization Process Theory informed evaluation design and interpretation of results. Survey response was 77% (n = 123/160). Sustainability of nurse identification of domestic violence appeared to be due to greater nurse discussion and domestic violence disclosure by women, facilitated by use of a maternal health and well-being checklist. Over time, intervention group nurses used the maternal checklist more at specific maternal health visits and found the checklist the most helpful resource assisting their domestic violence work. Nurses spoke of a degree of 'normalization' to domestic violence screening that will need constant investment to maintain. Sustainable domestic violence screening and support outcomes can be achieved in an environment of comprehensive, nurse-designed and theory-driven implementation. Continuing training, discussion and monitoring of domestic violence work is needed to retain sustainable practices. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
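A brute-force version of the bounding idea can be sketched directly (the paper's accelerator, subset simulation, is not reproduced here; g, the interval width, and all values below are hypothetical, and the vertex method used is valid when the limit state is monotone in each input):

    import numpy as np
    from itertools import product

    def failure_probability_bounds(g, sample_interval, n_samples=10000, seed=3):
        """Monte Carlo bounds on P(g(X) <= 0) when each sample of X is an
        interval focal element of a random set rather than a point."""
        rng = np.random.default_rng(seed)
        n_lo = n_hi = 0
        for _ in range(n_samples):
            lo, hi = sample_interval(rng)                            # one focal element
            vals = [g(np.array(v)) for v in product(*zip(lo, hi))]   # all vertices
            if max(vals) <= 0: n_lo += 1   # surely fails   -> lower bound
            if min(vals) <= 0: n_hi += 1   # possibly fails -> upper bound
        return n_lo / n_samples, n_hi / n_samples

    def sample_interval(rng):              # interval uncertainty around each draw
        x = rng.normal(size=2)
        return x - 0.1, x + 0.1

    p_lower, p_upper = failure_probability_bounds(lambda v: 3.0 - v.sum(),
                                                  sample_interval)

The gap between p_lower and p_upper is exactly the imprecision carried by the interval (random set) description of the inputs.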
Nonlocal torque operators in ab initio theory of the Gilbert damping in random ferromagnetic alloys
NASA Astrophysics Data System (ADS)
Turek, I.; Kudrnovský, J.; Drchal, V.
2015-12-01
We present an ab initio theory of the Gilbert damping in substitutionally disordered ferromagnetic alloys. The theory rests on introduced nonlocal torques which replace traditional local torque operators in the well-known torque-correlation formula and which can be formulated within the atomic-sphere approximation. The formalism is sketched in a simple tight-binding model and worked out in detail in the relativistic tight-binding linear muffin-tin orbital method and the coherent potential approximation (CPA). The resulting nonlocal torques are represented by nonrandom, non-site-diagonal, and spin-independent matrices, which simplifies the configuration averaging. The CPA-vertex corrections play a crucial role for the internal consistency of the theory and for its exact equivalence to other first-principles approaches based on the random local torques. This equivalence is also illustrated by the calculated Gilbert damping parameters for binary NiFe and FeCo random alloys, for pure iron with a model atomic-level disorder, and for stoichiometric FePt alloys with a varying degree of L10 atomic long-range order.
Early vision and focal attention
NASA Astrophysics Data System (ADS)
Julesz, Bela
1991-07-01
At the thirty-year anniversary of the introduction of the technique of computer-generated random-dot stereograms and random-dot cinematograms into psychology, the impact of the technique on brain research and on the study of artificial intelligence is reviewed. The main finding, that stereoscopic depth perception (stereopsis), motion perception, and preattentive texture discrimination are basically bottom-up processes which occur without the help of the top-down processes of cognition and semantic memory, greatly simplifies the study of these processes of early vision and permits the linking of human perception with monkey neurophysiology. Particularly interesting are the unexpected findings that stereopsis (assumed to be local) is a global process, while texture discrimination (assumed to be a global process, governed by statistics) is local, based on some conspicuous local features (textons). It is shown that the top-down process of "shape (depth) from shading" does not affect stereopsis, and some of the models of machine vision are evaluated. The asymmetry effect of human texture discrimination is discussed, together with recent nonlinear spatial filter models and a novel extension of the texton theory that can cope with the asymmetry problem. This didactic review attempts to introduce the physicist to the field of psychobiology and its problems, including metascientific problems of brain research, problems of scientific creativity, the state of artificial intelligence research (including connectionist neural networks) aimed at modeling brain activity, and the fundamental role of focal attention in mental events.
Planning in Higher Education and Chaos Theory: A Model, a Method.
ERIC Educational Resources Information Center
Cutright, Marc
This paper proposes a model, based on chaos theory, that explores strategic planning in higher education. It notes that chaos theory was first developed in the physical sciences to explain how apparently random activity was, in fact, complexly patterned. The paper goes on to describe how chaos theory has subsequently been applied to the social…
The Prediction of Item Parameters Based on Classical Test Theory and Latent Trait Theory
ERIC Educational Resources Information Center
Anil, Duygu
2008-01-01
In this study, the predictive power of experts' judgments of item characteristics, for conditions in which try-out practices cannot be applied, was examined against item characteristics computed according to classical test theory and the two-parameter logistic model of latent trait theory. The study was carried out on 9914 randomly selected students…
Qualitative Differences between Naive and Scientific Theories of Evolution
ERIC Educational Resources Information Center
Shtulman, Andrew
2006-01-01
Philosophers of biology have long argued that Darwin's theory of evolution was qualitatively different from all earlier theories of evolution. Whereas Darwin's predecessors and contemporaries explained adaptation as the transformation of a species' ''essence,'' Darwin explained adaptation as the selective propagation of randomly occurring…
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
Dickinson, R J
1985-04-01
In a recent paper, Vaknine and Lorenz discuss the merits of lateral deconvolution of demodulated B-scans. While this technique will decrease the lateral blurring of single discrete targets, such as the diaphragm in their figure 3, it is inappropriate to apply the method to the echoes arising from inhomogeneous structures such as soft tissue. In this latter case, the echoes from individual scatterers within the resolution cell of the transducer interfere to give random fluctuations in received echo amplitude, termed speckle. Although this process can be modeled as a linear convolution similar to that of conventional image formation theory, demodulation is a nonlinear process which loses the all-important phase information and prevents the subsequent restoration of the image by Wiener filtering, itself a linear process.
Modelling of squall with the generalised kinetic equation
NASA Astrophysics Data System (ADS)
Annenkov, Sergei; Shrira, Victor
2014-05-01
We study the long-term evolution of random wind waves using the new generalised kinetic equation (GKE). The GKE derivation [1] does not assume the quasi-stationarity of a random wave field. In contrast with the Hasselmann kinetic equation, the GKE can describe fast spectral changes occurring when a wave field is driven out of a quasi-equilibrium state by a fast increase or decrease of wind, or by other factors. In these cases, a random wave field evolves on the dynamic timescale typical of coherent wave processes, rather than on the kinetic timescale predicted by the conventional statistical theory. Besides that, the generalised theory allows one to trace the evolution of higher statistical moments of the field, notably the kurtosis, which is important for assessing the risk of freak waves and other applications. A new efficient and highly parallelised algorithm for the numerical simulation of the generalised kinetic equation is presented and discussed. Unlike in the case of the Hasselmann equation, the algorithm takes into account all (resonant and non-resonant) nonlinear wave interactions, but only approximately resonant interactions contribute to the spectral evolution. However, counter-intuitively, all interactions contribute to the kurtosis. Without forcing or dissipation, the algorithm is shown to conserve the relevant integrals. We show that under steady wind forcing the wave field evolution predicted by the GKE is close to the predictions of the conventional statistical theory, which is applicable in this case. In particular, we demonstrate the known long-term asymptotics for the evolution of the spectrum. When the wind forcing is not steady (in the simplest case, an instant increase or decrease of wind occurs), the generalised theory is the only way to study the spectral evolution, apart from direct numerical simulation. The focus of the work is a detailed analysis of the fast evolution after an instant change of forcing, and of the subsequent transition to the new quasi-stationary state of a wave field. It is shown that both an increase and a decrease of wind lead to a significant transient increase of the dynamic kurtosis, although these changes remain small compared to the changes of the other component of the kurtosis, which is due to bound harmonics. Special consideration is given to the case of a squall, i.e. an instant and large (by a factor of 2-4) increase of wind, which lasts for O(10^2) characteristic wave periods. We show that fast adjustment processes lead to the formation of a transient spectrum, which has a considerably narrower peak than the spectra developed under a steady forcing. These transient spectra differ qualitatively from those predicted by the Hasselmann kinetic equation under a squall with the same parameters. [1] S. Annenkov, V. Shrira (2006), Role of non-resonant interactions in evolution of nonlinear random water wave fields, J. Fluid Mech. 561, 181-207.
Spatial Distribution of Phase Singularities in Optical Random Vector Waves.
De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L
2016-08-26
Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.
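The scalar baseline that these vector-field results extend can be simulated directly. The sketch below (illustrative sizes; it reproduces the isotropic scalar setting of Berry and Dennis, not the vector experiment) synthesizes a random wave field as a sum of plane waves with random directions and phases, then locates phase singularities by the ±2π winding of the phase around grid plaquettes:

    import numpy as np

    def phase_singularities(n_waves=30, grid=256, seed=9):
        """Positions of phase vortices in an isotropic random scalar wave field."""
        rng = np.random.default_rng(seed)
        x = np.linspace(0.0, 20.0, grid)
        X, Y = np.meshgrid(x, x)
        field = np.zeros_like(X, dtype=complex)
        for _ in range(n_waves):
            th, ph = rng.uniform(0.0, 2.0 * np.pi, 2)
            field += np.exp(1j * (np.cos(th) * X + np.sin(th) * Y + ph))
        phase = np.angle(field)
        wrap = lambda a: (a + np.pi) % (2.0 * np.pi) - np.pi
        dh, dv = wrap(np.diff(phase, axis=1)), wrap(np.diff(phase, axis=0))
        winding = dh[:-1, :] + dv[:, 1:] - dh[1:, :] - dv[:, :-1]
        return np.argwhere(np.abs(winding) > np.pi)  # plaquettes holding a vortex

Pair-correlation statistics of the returned vortex positions are what exhibit the liquid-like behaviour for scalar fields and the anisotropy for the individual vector components described above.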
Stochastic switching in biology: from genotype to phenotype
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2017-03-01
There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression are often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.
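One of the simplest concrete instances of the switching discussed in this review is the two-state (telegraph) gene, which can be simulated exactly with the Gillespie algorithm; the sketch below is a generic textbook version with hypothetical rate constants, not a model taken from the review itself.

    import numpy as np

    def telegraph_gene(k_on=0.05, k_off=0.15, k_prod=10.0, k_dec=1.0,
                       t_end=200.0, seed=10):
        """Gillespie simulation: the promoter switches on/off at random and
        protein is produced only in the on state (transcriptional bursting)."""
        rng = np.random.default_rng(seed)
        t, gene_on, protein = 0.0, 0, 0
        times, counts = [0.0], [0]
        while t < t_end:
            rates = np.array([k_on * (1 - gene_on), k_off * gene_on,
                              k_prod * gene_on, k_dec * protein])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            event = rng.choice(4, p=rates / total)
            if event == 0:   gene_on = 1
            elif event == 1: gene_on = 0
            elif event == 2: protein += 1
            else:            protein -= 1
            times.append(t); counts.append(protein)
        return np.array(times), np.array(counts)

With slow switching (small k_on, k_off) the trajectory shows the bursty, binary-like responses described above; speeding the switching up recovers graded, near-deterministic expression.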
Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.
Shalymov, Dmitry S; Fradkov, Alexander L
2016-01-01
We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
A unified perturbation expansion for surface scattering
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto; Kim, Yunjin
1992-01-01
Starting with the extinction theorem, a perturbation expansion is presented which, to first and second orders, converges over a wider domain than the small perturbation expansion and the momentum transfer expansion. It is shown that, in the appropriate limits, both of these theories, as well as the two-scale expansion, are recovered. There is no adjustable parameter, such as a spectral split, in the theory. The theory is applied to random rough surfaces, and analytic expressions are derived for the coherent field and the bistatic cross section. Finally, a numerical test of the theory against method-of-moments results for Gaussian random rough surfaces with a power-law spectrum is given. These results show that the expansion is remarkably accurate over a large range of surface heights and slopes for both horizontal and vertical polarization.
Weak scattering of scalar and electromagnetic random fields
NASA Astrophysics Data System (ADS)
Tong, Zhisong
This dissertation encompasses several studies relating to the theory of weak potential scattering of scalar and electromagnetic random, wide-sense statistically stationary fields from various types of deterministic or random linear media. The proposed theory is largely based on the first Born approximation for potential scattering and on the angular spectrum representation of fields. The main focus of the scalar counterpart of the theory is made on calculation of the second-order statistics of scattered light fields in cases when the scattering medium consists of several types of discrete particles with deterministic or random potentials. It is shown that the knowledge of the correlation properties for the particles of the same and different types, described with the newly introduced pair-scattering matrix, is crucial for determining the spectral and coherence states of the scattered radiation. The approach based on the pair-scattering matrix is then used for solving an inverse problem of determining the location of an "alien" particle within the scattering collection of "normal" particles, from several measurements of the spectral density of scattered light. Weak scalar scattering of light from a particulate medium in the presence of optical turbulence existing between the scattering centers is then approached using the combination of the Born's theory for treating the light interaction with discrete particles and the Rytov's theory for light propagation in extended turbulent medium. It is demonstrated how the statistics of scattered radiation depend on scattering potentials of particles and the power spectra of the refractive index fluctuations of turbulence. This theory is of utmost importance for applications involving atmospheric and oceanic light transmission. The second part of the dissertation includes the theoretical procedure developed for predicting the second-order statistics of the electromagnetic random fields, such as polarization and linear momentum, scattered from static media. The spatial distribution of these properties of scattered fields is shown to be substantially dependent on the correlation and polarization properties of incident fields and on the statistics of the refractive index distribution within the scatterers. Further, an example is considered which illustrates the usefulness of the electromagnetic scattering theory of random fields in the case when the scattering medium is a thin bio-tissue layer with the prescribed power spectrum of the refractive index fluctuations. The polarization state of the scattered light is shown to be influenced by correlation and polarization states of the illumination as well as by the particle size distribution of the tissue slice.
Universality in the dynamical properties of seismic vibrations
NASA Astrophysics Data System (ADS)
Chatterjee, Soumya; Barat, P.; Mukherjee, Indranil
2018-02-01
We have studied the statistical properties of the observed magnitudes of seismic vibration data in discrete time in an attempt to understand the underlying complex dynamical processes. The observed magnitude data are taken from six different geographical locations. All possible magnitudes are considered in the analysis, including catastrophic vibrations, foreshocks, aftershocks and commonplace daily vibrations. The probability distribution functions of these data sets obey a scaling law and display a certain universality characteristic. To investigate the universality features in the observed data generated by a complex process, we applied Random Matrix Theory (RMT) in the framework of the Gaussian Orthogonal Ensemble (GOE). For all six locations, the observed data show a close fit with the predictions of RMT. This reinforces the idea of universality in the dynamical processes generating seismic vibrations.
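The GOE comparison rests on the nearest-neighbour spacing distribution. A minimal numerical check (illustrative sizes, and a crude bulk-window unfolding in place of a proper one) is sketched below; a histogram of the returned spacings should follow the Wigner surmise p(s) = (pi/2) s exp(-pi s^2 / 4) when GOE statistics hold, which is the kind of fit reported for the six data sets.

    import numpy as np

    def goe_spacings(n=400, n_matrices=50, seed=4):
        """Normalized nearest-neighbour eigenvalue spacings of GOE matrices."""
        rng = np.random.default_rng(seed)
        spacings = []
        for _ in range(n_matrices):
            a = rng.normal(size=(n, n))
            h = (a + a.T) / np.sqrt(2.0)           # GOE matrix
            e = np.sort(np.linalg.eigvalsh(h))
            bulk = e[n // 4: 3 * n // 4]           # keep the spectral bulk
            s = np.diff(bulk)
            spacings.extend(s / s.mean())          # crude local unfolding
        return np.asarray(spacings)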
Skinner, Michael K
2015-04-26
Environment has a critical role in the natural selection process for Darwinian evolution. The primary molecular component currently considered for neo-Darwinian evolution involves genetic alterations and random mutations that generate the phenotypic variation required for natural selection to act. The vast majority of environmental factors cannot directly alter DNA sequence. Epigenetic mechanisms directly regulate genetic processes and can be dramatically altered by environmental factors. Therefore, environmental epigenetics provides a molecular mechanism to directly alter phenotypic variation generationally. Lamarck proposed in 1802 the concept that environment can directly alter phenotype in a heritable manner. Environmental epigenetics and epigenetic transgenerational inheritance provide molecular mechanisms for this process. Therefore, the environment can, at the molecular level, directly influence phenotypic variation. The ability of environmental epigenetics to directly alter phenotypic and genotypic variation can significantly impact natural selection. The neo-Lamarckian concept can thus facilitate neo-Darwinian evolution. A unified theory of evolution is presented to describe the integration of environmental epigenetic and genetic aspects of evolution. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
NASA Astrophysics Data System (ADS)
Larsson, Caroline; Tibell, Lena A. E.
2015-10-01
A well-ordered biological complex can be formed by the random motion of its components, i.e. it can self-assemble. This concept incorporates issues that may contradict students' everyday experiences and intuitions. In previous studies, we have shown that a tangible model of virus self-assembly, used in a group exercise, helps students to grasp the process of self-assembly and in particular the facet "random molecular collision". The present study investigates how and why the model and the group exercise facilitate students' learning of this particular facet. The data analysed consist of audio recordings of six group exercises (n = 35 university students) and individual semi-structured interviews (n = 5 university students). The analysis is based on constructivist perspectives of learning, a combination of conceptual change theory and learning with external representations. Qualitative analysis indicates that perceived counterintuitive aspects of the process created a cognitive conflict within learners. The tangible model used in the group exercises facilitated a conceptual change in their understanding of the process. In particular, the tangible model appeared to provide cues and possible explanations and functioned as an "eye-opener" and a "thinking tool". Lastly, the results show signs of emotions also being important elements for successful accommodation.
NASA Astrophysics Data System (ADS)
Zheng, Yonghui; Sun, Huayan; Zhao, Yanzhong; Chen, Jianbiao
2015-10-01
Active laser detection techniques have broad application prospects in antimissile and air defense; however, the aerodynamic flow field around planes and missiles causes serious distortion of the detecting laser beams. Many computational fluid dynamics (CFD) codes can predict the air density distribution and the density fluctuations of the flow field, so physical optics is needed to predict the distortion after propagation through this complex process. Aiming at the physical process in which the laser propagates twice, through the "cat-eye" lenses and the aerodynamic flow field, a distortion propagation calculation method is investigated in this paper. The whole process is divided into two parts, and the aero-optical optical path difference is treated as a phase distortion; the incidence and reflection processes are calculated using the Collins formula and angular spectrum diffraction theory, respectively. In addition, the turbulent performance of the aerodynamic flow field is estimated according to the theory of electromagnetic propagation through a random medium, and the rms optical path difference and Strehl ratio of the turbulent optical distortion are obtained. Finally, the computational fluid mechanics and aero-optical distortion properties of the detecting laser beams are calculated with the hemisphere-on-cylinder turret as an example, and the calculation results are presented and analysed.
Button, Melissa L; Norouzian, Nikoo; Westra, Henny A; Constantino, Michael J; Antony, Martin M
2018-01-22
Addressing methodological shortcomings of prior work on process expectations, this study examined client process expectations both prospectively and retrospectively following treatment. Differences between clients receiving cognitive behavioral therapy (CBT) versus motivational interviewing integrated with CBT (MI-CBT) were also examined. Grounded theory analysis was used to study narratives of 10 participants (N = 5 CBT, 5 MI-CBT) who completed treatment for severe generalized anxiety disorder as part of a larger randomized controlled trial. Clients in both groups reported and elaborated expectancy disconfirmations more than expectancy confirmations. Compared to CBT clients, MI-CBT clients reported experiencing greater agency in the treatment process than expected (e.g., that they did most of the work) and that therapy provided a corrective experience. Despite nearly all clients achieving recovery status, CBT clients described therapy as not working in some ways (i.e., tasks did not fit, lack of improvement) and that they overcame initial skepticism regarding treatment. Largely converging with MI theory, findings highlight the role of key therapist behaviors (e.g., encouraging client autonomy, validating) in facilitating client experiences of the self as an agentic individual who is actively engaged in the therapy process and capable of effecting change.
Hoeffding Type Inequalities and their Applications in Statistics and Operations Research
NASA Astrophysics Data System (ADS)
Daras, Tryfon
2007-09-01
Large Deviation theory is the branch of Probability theory that deals with rare events. Sometimes, these events can be described by a sum of random variables that deviates from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g. in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, polymer chains [1]. In this paper we prove an inequality of exponential type, namely theorem 2.1, which gives a large deviation upper bound for a specific sequence of random variables. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already proven results of this type in the case of symmetric probability measures. As consequences of the inequality we obtain: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences, and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and discuss its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable random variables.
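For orientation, the classical i.i.d. inequality that the paper generalizes to exchangeable sequences can be checked numerically in a few lines (n, t and the trial count below are arbitrary illustrative choices):

    import numpy as np

    def hoeffding_check(n=500, t=0.05, n_trials=20000, seed=5):
        """Empirical P(|mean - 1/2| >= t) for n fair Bernoulli variables,
        against the Hoeffding bound 2 exp(-2 n t^2)."""
        rng = np.random.default_rng(seed)
        means = rng.binomial(n, 0.5, size=n_trials) / n
        empirical = np.mean(np.abs(means - 0.5) >= t)
        bound = 2.0 * np.exp(-2.0 * n * t ** 2)
        return empirical, bound   # empirical frequency always below the bound

The paper's theorem 2.1 plays the same role when the Bernoulli variables are merely exchangeable, where the classical i.i.d. bound does not directly apply.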
A random walk on water (Henry Darcy Medal Lecture)
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2009-04-01
Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a Manichaean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are fundamentally manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy. To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003). Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but, (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]).
In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates. An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (average of past data) which fully neglects the deterministic dynamics is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with the probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) do not resemble a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with a Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible, but also powerful as it provides good predictions for both short and long horizons and helps to decide on when the deterministic dynamics should be considered or neglected. Obviously, a natural system is extremely more complex than this simple toy model and hence unpredictability is naturally even more prominent in the former. In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure of performing such testing against evidence from data renders the hypothesised dynamics worthless. If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? 
In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
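The lecture's toy-model argument is easy to reproduce in spirit. The sketch below substitutes the logistic map for the (unreproduced) vegetation water-balance dynamics, purely as an assumed stand-in, and contrasts a deterministic forecast from slightly perturbed initial conditions with the naive climatological mean:

    import numpy as np

    def toy_experiment(x0=0.41, eps=1e-6, horizon=60):
        """Fully deterministic chaotic dynamics with a tiny initial error."""
        f = lambda x: 4.0 * x * (1.0 - x)          # chaotic logistic map
        truth, model = [x0], [x0 + eps]
        for _ in range(horizon):
            truth.append(f(truth[-1]))
            model.append(f(model[-1]))
        truth, model = np.array(truth), np.array(model)
        det_error = np.abs(truth - model)          # deterministic forecast error
        clim_error = np.abs(truth - truth.mean())  # naive statistical forecast
        return det_error, clim_error

Here det_error stays tiny for roughly the first twenty steps and then saturates, after which the statistical forecast is the more skilful one: predictability and unpredictability coexisting in a single deterministic system, as argued above.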
What is quantum in quantum randomness?
Grangier, P; Auffèves, A
2018-07-13
It is often said that quantum and classical randomness are of different natures, the former being ontological and the latter epistemological. However, so far the question of 'What is quantum in quantum randomness?', i.e. what is the impact of quantization and discreteness on the nature of randomness, remains to be answered. In the first part, we make explicit the differences between quantum and classical randomness within a recently proposed ontology for quantum mechanics based on contextual objectivity. In this view, quantum randomness is the result of contextuality and quantization. We show that this approach strongly impacts the purposes of quantum theory as well as its areas of application. In particular, it challenges current programmes inspired by classical reductionism, aiming at the emergence of the classical world from a large number of quantum systems. In the second part, we analyse quantum physics and thermodynamics as theories of randomness, unveiling their mutual influences. We finally consider new technological applications of quantum randomness that have opened up in the emerging field of quantum thermodynamics. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).
Symplectic analysis of vertical random vibration for coupled vehicle track systems
NASA Astrophysics Data System (ADS)
Lu, F.; Kennedy, D.; Williams, F. W.; Lin, J. H.
2008-10-01
A computational model for random vibration analysis of vehicle-track systems is proposed, and solutions use the pseudo excitation method (PEM) and the symplectic method. The vehicle is modelled as a mass-spring-damping system with 10 degrees of freedom (dofs), which consist of vertical and pitching motion for the vehicle body and its two bogies and vertical motion for the four wheelsets. The track is treated as an infinite Bernoulli-Euler beam connected to sleepers and hence to ballast, and is regarded as a periodic structure. Linear springs couple the vehicle and the track. Hence, the coupled vehicle-track system has only 26 dofs. A fixed excitation model is used, i.e. the vehicle does not move along the track but instead the track irregularity profile moves backwards at the vehicle velocity. This irregularity is assumed to be a stationary random process. Random vibration theory is used to obtain the response power spectral densities (PSDs), by using PEM to transform this random multiexcitation problem into a deterministic harmonic excitation one and then applying the symplectic solution methodology. Numerical results for an example include verification of the proposed method by comparison with finite element method (FEM) results; comparison between the present model and the traditional rigid track model; and discussion of the influences of track damping and vehicle velocity.
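The core of the pseudo excitation method is compact enough to show on a single-degree-of-freedom oscillator, whereas the paper applies it to the full 26-dof coupled system with symplectic machinery; the input spectrum and the m, c, k values below are hypothetical.

    import numpy as np

    def response_psd_pem(S_in, omegas, m=1.0, c=0.1, k=10.0):
        """PEM for one dof: excite harmonically with amplitude sqrt(S_in(w));
        the response PSD is the squared magnitude of the harmonic response."""
        H = 1.0 / (k - m * omegas ** 2 + 1j * c * omegas)  # receptance FRF
        return np.abs(H * np.sqrt(S_in)) ** 2              # S_out = |H|^2 S_in

    omegas = np.linspace(0.1, 10.0, 500)
    S_track = 1.0 / (1.0 + omegas ** 2)   # hypothetical irregularity spectrum
    S_resp = response_psd_pem(S_track, omegas)

The same trick scales to the multi-dof case: one deterministic harmonic analysis per frequency line replaces the full input-output PSD matrix algebra, which is what keeps the coupled 26-dof analysis above tractable.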
ERIC Educational Resources Information Center
Larwin, Karen H.; Larwin, David A.
2011-01-01
Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…
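The kind of resampling demonstration such courses rely on is compact enough to show here; the percentile bootstrap below is a generic classroom example, not the authors' teaching materials.

    import numpy as np

    def bootstrap_ci(data, stat=np.mean, n_boot=10000, alpha=0.05, seed=6):
        """Percentile bootstrap confidence interval for any statistic."""
        rng = np.random.default_rng(seed)
        boots = [stat(rng.choice(data, size=len(data), replace=True))
                 for _ in range(n_boot)]
        return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    sample = np.random.default_rng(7).exponential(2.0, size=40)
    print(bootstrap_ci(sample))   # CI for the mean of a skewed sample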
Estabrooks, Paul A; Glasgow, Russ E; Xu, Stan; Dzewaltowski, David A; Lee, Rebecca E; Thomas, Deborah; Almeida, Fabio A; Thayer, Amy N; Smith-Ray, Renae L
2011-01-01
OBJECTIVES: Despite the widely acknowledged benefits of regular physical activity (PA), specific goals for increased population levels of PA, and strongly recommended strategies to promote PA, there is no evidence suggesting that the prevalence of PA is improving. If PA intervention research is to be improved, theory should be used as the basis for intervention development, participant context or environment should be considered in the process, and intervention characteristics that will heighten the likelihood of translation into practice should be implemented (e.g., ease of implementation, low human resource costs). The purpose of this paper is to describe the implementation of the aforementioned concepts within the intervention development process associated with CardiACTION, an ongoing randomized 2 × 2 factorial trial. METHODS: The Ecological Model of Physical Activity integrated with Protection Motivation Theory was used to inform the design of the interventions. This integrated model was selected to allow for the development of theory-based individual, environmental, and individually + environmentally targeted physical activity interventions. All intervention strategies were matched to proposed mediators of behavior change. Strategies were then matched to the most appropriate interactive technology (i.e., interactive computer session, automated telephone counseling, and tailored mailings) delivery channel. CONCLUSIONS: The potential implications of this study include determining the independent and combined influence of individual and environment mechanisms of behavior change on intervention effectiveness. In addition, all intervention models are developed to be scalable and disseminable to a broad audience at a low cost.
Extended self-similarity in the two-dimensional metal-insulator transition
NASA Astrophysics Data System (ADS)
Moriconi, L.
2003-09-01
We show that extended self-similarity, a scaling phenomenon first observed in classical turbulent flows, holds for a two-dimensional metal-insulator transition that belongs to the universality class of random Dirac fermions. Deviations from multifractality, which in turbulence are due to the dominance of diffusive processes at small scales, appear in the condensed-matter context as a large-scale, finite-size effect related to the imposition of an infrared cutoff in the field theory formulation. We propose a phenomenological interpretation of extended self-similarity in the metal-insulator transition within the framework of the random β-model description of multifractal sets. As a natural step, our discussion is bridged to the analysis of strange attractors, where crossovers between multifractal and nonmultifractal regimes are found and extended self-similarity turns out to be verified as well.
Spectra of random networks in the weak clustering regime
NASA Astrophysics Data System (ADS)
Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen; Rodrigues, Francisco A.
2018-03-01
The asymptotic behavior of dynamical processes in networks can be expressed as a function of the spectral properties of the corresponding adjacency and Laplacian matrices. Although many theoretical results are known for the spectra of traditional configuration models, networks generated through these models fail to describe many topological features of real-world networks, in particular non-null values of the clustering coefficient. Here we study the effects of cycles of order three (triangles) on network spectra. By using recent advances in random matrix theory, we determine the spectral distribution of the network adjacency matrix as a function of the average number of triangles attached to each node, for networks without modular structure and degree-degree correlations. Implications for network dynamics are discussed. Our findings can shed light on the study of how particular kinds of subgraphs influence network dynamics.
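The triangle effect on the bulk spectrum can be probed numerically. The sketch below (illustrative sizes; a toy probe, not the paper's analytical random-matrix calculation) compares the adjacency spectrum of a plain Erdős-Rényi graph with one in which extra triangles are planted at random:

    import numpy as np

    def adjacency_spectrum(n=500, p=0.02, extra_triangles=0, seed=8):
        """Eigenvalues of an ER adjacency matrix, optionally with random
        triangles planted to raise the clustering coefficient."""
        rng = np.random.default_rng(seed)
        upper = np.triu(rng.random((n, n)) < p, k=1)
        A = (upper | upper.T).astype(float)
        for _ in range(extra_triangles):
            i, j, k = rng.choice(n, size=3, replace=False)
            A[i, j] = A[j, i] = A[j, k] = A[k, j] = A[i, k] = A[k, i] = 1.0
        return np.sort(np.linalg.eigvalsh(A))

Comparing histograms for extra_triangles=0 and a triangle-rich version shows the skew that third-order cycles impose on the bulk (a nonzero sum of cubed eigenvalues), the effect the paper quantifies.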
How mutation affects evolutionary games on graphs
Allen, Benjamin; Traulsen, Arne; Tarnita, Corina E.; Nowak, Martin A.
2011-01-01
Evolutionary dynamics are affected by population structure, mutation rates and update rules. Spatial or network structure facilitates the clustering of strategies, which represents a mechanism for the evolution of cooperation. Mutation dilutes this effect. Here we analyze how mutation influences evolutionary clustering on graphs. We introduce new mathematical methods to evolutionary game theory, specifically the analysis of coalescing random walks via generating functions. These techniques allow us to derive exact identity-by-descent (IBD) probabilities, which characterize spatial assortment on lattices and Cayley trees. From these IBD probabilities we obtain exact conditions for the evolution of cooperation and other game strategies, showing the dual effects of graph topology and mutation rate. High mutation rates diminish the clustering of cooperators, hindering their evolutionary success. Our model can represent either genetic evolution with mutation, or social imitation processes with random strategy exploration. PMID:21473871
Effects of Visual/Verbal Associations.
ERIC Educational Resources Information Center
Martin, Anna C.
The differential effects of instructional strategies on recall and comprehension of terms frequently used in the formal analysis of art were examined. The study drew on a synthesis of three theoretical positions: dual-coding theory, schema theory, and elaboration theory. Two hundred fifty sixth-grade students were randomly assigned to three groups:…
Brief Instrumental School-Based Mentoring for Middle School Students: Theory and Impact
ERIC Educational Resources Information Center
McQuillin, Samuel D.; Lyons, Michael D.
2016-01-01
This study evaluated the efficacy of an intentionally brief school-based mentoring program. This academic goal-focused mentoring program was developed through a series of iterative randomized controlled trials, and is informed by research in social cognitive theory, cognitive dissonance theory, motivational interviewing, and research in academic…
Modeling the physisorption of graphene on metals
NASA Astrophysics Data System (ADS)
Tao, Jianmin; Tang, Hong; Patra, Abhirup; Bhattarai, Puskar; Perdew, John P.
2018-04-01
Many processes of technological and fundamental importance occur on surfaces. Among these phenomena, adsorption has received the most attention. However, it presents a great challenge to conventional density functional theory. Starting with the Lifshitz-Zaremba-Kohn second-order perturbation theory, here we develop a long-range van der Waals (vdW) correction for the physisorption of graphene on metals. The model importantly includes the quadrupole-surface interaction and screening effects. The results show that, when the vdW correction is combined with the Perdew-Burke-Ernzerhof functional, it yields adsorption energies in good agreement with the random-phase approximation, significantly improving upon other vdW methods. We also find that, compared with the leading-order interaction, the higher-order quadrupole-surface correction accounts for about 25% of the total vdW correction, underscoring the importance of the higher-order term.
Morrison, Deborah; Mair, Frances S; Chaudhuri, Rekha; McGee-Lennon, Marilyn; Thomas, Mike; Thomson, Neil C; Yardley, Lucy; Wyke, Sally
2015-07-28
Around 300 million people worldwide have asthma, and prevalence is increasing. Self-management can be effective in improving a range of outcomes and is cost effective, but it is underutilised as a treatment strategy. Supporting optimum self-management using digital technology shows promise, but how best to do this is not clear. We aimed to develop an evidence-based, theory-informed, online resource to support self-management in adults with asthma, called 'Living well with Asthma', as part of the RAISIN (Randomized Trial of an Asthma Internet Self-Management Intervention) study. We developed Living well with Asthma in two phases. Phase 1: a low-fidelity (paper-based) prototype of the website was developed iteratively through input from a multidisciplinary expert panel, empirical evidence from the literature, and potential end users via focus groups (adults with asthma and practice nurses). Implementation and behaviour change theories informed this process. Phase 2: the paper-based designs were converted to a website through an iterative, user-centred process. Adults with asthma (n = 10) took part in think-aloud studies, discussing first the paper-based version and then the web-based version. Participants considered contents, layout, and navigation. Development was agile: feedback from the think-aloud sessions was used immediately to inform the design and subsequent sessions. Think-aloud transcripts were also thematically analysed, further informing resource development. The website asked users to aim to be symptom free. Key behaviours targeted to achieve this included: optimising medication use (including inhaler technique); attending primary care asthma reviews; using asthma action plans; increasing physical activity levels; and stopping smoking. The website had 11 sections, plus email reminders, which promoted these behaviours. Feedback on the contents of the resource was mainly positive, with most changes focusing on clarification of language, the order of pages, and usability issues relating mainly to navigation difficulties. Our multifaceted approach to online intervention development, underpinned by theory, using evidence from the literature, and co-designed with end users and a multidisciplinary panel, has resulted in a resource which end users find relevant to their needs and easy to use. Living well with Asthma is undergoing evaluation within a randomized controlled trial.
Lotka-Volterra system in a random environment.
Dimentberg, Mikhail F
2002-03-01
The classical Lotka-Volterra (LV) model for oscillatory behavior of the population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered in which the equation for prey reproduction contains two additional terms: a quadratic "damping" term that accounts for interspecies competition, and a term with white-noise random variations of the prey reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for the stationary probability densities (PDFs) of the population sizes. It shows that both population sizes are independent gamma-distributed stationary random processes. An increasing level of environmental variation does not lead to extinction of the populations. However, it may lead to intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining at a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDFs, as well as by applying the theory of excursions of random functions and by predicting the PDF of peaks in the predator population size.
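A short Euler-Maruyama sketch of the generalized LV model described above, with prey u, predator v, quadratic damping eps, and white-noise intensity sigma; all parameter values are illustrative guesses, not taken from the paper.

```python
# Euler-Maruyama integration of the generalized LV model: noisy prey
# reproduction factor, quadratic prey damping, standard predator equation.
import numpy as np

rng = np.random.default_rng(2)
a, b, eps, sigma = 1.0, 1.0, 0.05, 0.3     # illustrative parameters
dt, steps = 1e-3, 500_000
u, v = 1.0, 1.0
us = np.empty(steps)

for t in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    u += u * (a - v - eps * u) * dt + sigma * u * dW  # noisy prey reproduction
    v += v * (u - b) * dt                             # predator equation
    us[t] = u

# Per the paper, the stationary prey sizes should be roughly gamma-distributed.
tail = us[steps // 2:]
print(f"prey mean {tail.mean():.3f}, variance {tail.var():.3f}")
```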
Random Matrix Theory and Econophysics
NASA Astrophysics Data System (ADS)
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system-specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system-specific property, i.e., as containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of results from localization theory. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller-Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g., R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory Analysis of Diffusion in Stock Price Dynamics'', preprint.
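The RMT comparison described above can be illustrated with a null-model check: the eigenvalues of a correlation matrix built from purely random surrogate returns should fall inside the Marchenko-Pastur band, so empirical eigenvalues escaping that band are candidates for genuine correlation information. The sizes below are illustrative.

```python
# Null-model sketch: correlation-matrix spectrum of iid surrogate returns
# versus the Marchenko-Pastur band [(1-sqrt(q))^2, (1+sqrt(q))^2], q = N/T.
import numpy as np

rng = np.random.default_rng(3)
N, T = 200, 1000                       # "stocks" and time points (illustrative)
R = rng.normal(size=(N, T))            # iid surrogate returns
R = (R - R.mean(1, keepdims=True)) / R.std(1, keepdims=True)
C = R @ R.T / T                        # empirical correlation matrix
eigs = np.linalg.eigvalsh(C)

q = N / T
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
print(f"MP band:        [{lam_minus:.3f}, {lam_plus:.3f}]")
print(f"empirical range: [{eigs[0]:.3f}, {eigs[-1]:.3f}]")
```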
Brownian Motion and its Conditional Descendants
NASA Astrophysics Data System (ADS)
Garbaczewski, Piotr
It happened before [1] that I concluded my publication with a special dedication to John R. Klauder. Then, the reason was John's PhD thesis [2] and the questions (perhaps outdated in the eyes of the band-wagon jumpers, albeit still retaining their full vitality [3]): (i) What are the uses of the classical (c-number, non-Grassmann) spinor fields, especially nonlinear ones; what are they for at all? (ii) What are, if any, the classical partners for Fermi models and fields in particular? The present dedication, even if not as conspicuously motivated as the previous one by John's research, nevertheless pertains to investigations pursued by John through the years and devoted to the analysis of random noise. Sometimes, re-reading old papers and re-analysing old, frequently forgotten ideas may prove more rewarding than racing the fashions. Following this attitude, let us take as the departure point Schrödinger's original suggestion [4] of the existence of a special class of random processes, which have their origin in the Einstein-Smoluchowski theory of Brownian motion and its codification by Wiener. Schrödinger's original analysis of the probabilistic significance of the heat equation, and of its time adjoint in parallel, remained unnoticed by the physics community and was since then forgotten. It reappeared, however, in the mathematical literature as an inspiration to generalise the concept of Markovian diffusions to the case of Bernstein stochastic processes. But it stayed without consequences for a deeper understanding of the possible physical phenomena which might underlie the corresponding abstract formalism. Schrödinger's objective was to initiate investigations of possible links between quantum theory and the theory of Brownian motion, an attempt which culminated later in the so-called Nelson's stochastic mechanics [8] and its encompassing formalism [7], in which the issue of the Brownian implementation of quantum dynamics is placed in the framework of Markov-Bernstein diffusions…
2009-01-01
Background Cancer-related pain is common and under-treated. This article describes a study designed to test the effectiveness of a theory-driven, patient-centered coaching intervention to improve cancer pain processes and outcomes. Methods/Design The Cancer Health Empowerment for Living without Pain (Ca-HELP) Study is an American Cancer Society sponsored randomized trial conducted in Sacramento, California. A total of 265 cancer patients with at least moderate pain severity (Worst Pain Numerical Analog Score >=4 out of 10) or pain-related impairment (Likert score >= 3 out of 5) were randomly assigned to receive tailored education and coaching (TEC) or educationally-enhanced usual care (EUC); 258 received at least one follow-up assessment. The TEC intervention is based on social-cognitive theory and consists of 6 components (assess, correct, teach, prepare, rehearse, portray). Both interventions were delivered over approximately 30 minutes just prior to a scheduled oncology visit. The majority of visits (56%) were audio-recorded for later communication coding. Follow-up data including outcomes related to pain severity and impairment, self-efficacy for pain control and for patient-physician communication, functional status and well-being, and anxiety were collected at 2, 6, and 12 weeks. Discussion Building on social cognitive theory and pilot work, this study aims to test the hypothesis that a brief, tailored patient activation intervention will promote better cancer pain care and outcomes. Analyses will focus on the effects of the experimental intervention on pain severity and impairment (primary outcomes); self-efficacy and quality of life (secondary outcomes); and relationships among processes and outcomes of cancer pain care. If this model of coaching by lay health educators proves successful, it could potentially be implemented widely at modest cost. Trial Registration [Clinical Trials Identifier: NCT00283166] PMID:19737424
OBSIFRAC: database-supported software for 3D modeling of rock mass fragmentation
NASA Astrophysics Data System (ADS)
Empereur-Mot, Luc; Villemin, Thierry
2003-03-01
Under stress, fractures in rock masses tend to form fully connected networks. The mass can thus be thought of as a 3D assembly of blocks produced by fragmentation processes. A numerical model has been developed that uses a relational database to describe such a mass. The model, which assumes the fractures to be planar, allows data from natural networks to be used to test theories concerning fragmentation processes. In the model, blocks are bordered by faces that are composed of edges and vertices. A fracture can originate from a seed point, its orientation being controlled by the stress field specified by an orientation matrix. Alternatively, it can be generated from a discrete set of given orientations and positions. Both kinds of fracture can occur together in a model. From an original simple block, a given fracture produces two simple polyhedral blocks, and the original block becomes compound. Compound and simple blocks created throughout fragmentation are stored in the database. Several fragmentation processes have been studied. In one scenario, a constant proportion of blocks is fragmented at each step of the process. The resulting distribution appears to be fractal, although seed points are random within each fragmented block. In a second scenario, division affects only one random block at each stage of the process and gives a Weibull volume distribution law. This software can be used for a large number of other applications.
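A toy version of the first fragmentation scenario above (a constant proportion of blocks split at each step); the split rule and the proportion are our own assumptions, intended only to make the scenario concrete.

```python
# Toy fragmentation: at every step a fixed proportion of blocks is split in
# two at a random volume fraction. Illustrative stand-in for the 3D model.
import random

def fragment(steps=12, proportion=0.5, seed=4):
    rng = random.Random(seed)
    volumes = [1.0]                          # start from a single unit block
    for _ in range(steps):
        rng.shuffle(volumes)
        k = max(1, int(proportion * len(volumes)))
        split, keep = volumes[:k], volumes[k:]
        for v in split:                      # split each selected block in two
            f = rng.random()
            keep.extend([f * v, (1 - f) * v])
        volumes = keep
    return volumes

vols = fragment()
print(len(vols), min(vols), max(vols))
# A log-log histogram of `vols` can then be inspected for the power-law
# (fractal) behavior reported in the paper.
```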
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel
2015-06-15
We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = -iħ S†(dS/dE), where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.
Exact Markov chains versus diffusion theory for haploid random mating.
Tyvand, Peder A; Thorvaldsen, Steinar
2010-05-01
Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck.
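A minimal sketch contrasting an exact Wright-Fisher Markov chain with its diffusion limit, using the textbook decay of expected heterozygosity; the population size and horizon are illustrative, and the binomial transition matrix is the standard Wright-Fisher construction rather than this paper's code.

```python
# Exact Wright-Fisher chain vs. diffusion limit for expected heterozygosity:
# the chain gives H_t = H_0 (1 - 1/(2N))^t exactly, while the diffusion
# limit gives H_0 exp(-t/(2N)).
import numpy as np
from scipy.stats import binom

N = 50                                        # 2N = 100 gene copies
states = np.arange(2 * N + 1)
# P[i, j] = Prob(j copies next generation | i copies now)
P = binom.pmf(states[None, :], 2 * N, (states / (2 * N))[:, None])

p = np.zeros(2 * N + 1)
p[N] = 1.0                                    # start at allele frequency 1/2
x = states / (2 * N)
for t in range(1, 101):
    p = p @ P                                 # exact one-generation update
    if t % 25 == 0:
        H_exact = np.sum(p * 2 * x * (1 - x))
        H_diff = 0.5 * np.exp(-t / (2 * N))
        print(t, round(H_exact, 4), round(H_diff, 4))
```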
Summing Feynman graphs by Monte Carlo: Planar ϕ3-theory and dynamically triangulated random surfaces
NASA Astrophysics Data System (ADS)
Boulatov, D. V.; Kazakov, V. A.
1988-12-01
New combinatorial identities are suggested relating the ratio of the (n - 1)th and nth orders of the (planar) perturbation expansion for any quantity to an average over the ensemble of all planar graphs of the nth order. These identities are used for Monte Carlo calculation of the critical exponent γ_str (string susceptibility) in planar ϕ3-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D = 1 the exact critical properties of the theory are reproduced numerically.
Random Matrix Theory Approach to Chaotic Coherent Perfect Absorbers
NASA Astrophysics Data System (ADS)
Li, Huanan; Suwunnarat, Suwun; Fleischmann, Ragnar; Schanz, Holger; Kottos, Tsampikos
2017-01-01
We employ random matrix theory in order to investigate coherent perfect absorption (CPA) in lossy systems with complex internal dynamics. The loss strength γ_CPA and energy E_CPA for which a CPA occurs are expressed in terms of the eigenmodes of the isolated cavity (thus carrying over the information about the chaotic nature of the target) and their coupling to a finite number of scattering channels. Our results are tested against numerical calculations using complex networks of resonators and chaotic graphs as CPA cavities.
Role of small-norm components in extended random-phase approximation
NASA Astrophysics Data System (ADS)
Tohyama, Mitsuru
2017-09-01
The role of small-norm amplitudes in extended random-phase approximation (RPA) theories, such as the particle-particle and hole-hole components of the one-body amplitudes and the two-body amplitudes other than the two-particle/two-hole components, is investigated for the one-dimensional Hubbard model using an extended RPA derived from time-dependent density-matrix theory. It is found that these amplitudes cannot be neglected in strongly interacting regimes, where the effects of ground-state correlations are significant.
Sadeh, Sadra; Rotter, Stefan
2014-01-01
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for the emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance-dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704
Synchronization invariance under network structural transformations
NASA Astrophysics Data System (ADS)
Arola-Fernández, Lluís; Díaz-Guilera, Albert; Arenas, Alex
2018-06-01
Synchronization processes are ubiquitous despite the many connectivity patterns that complex systems can show. Usually, the emergence of synchrony is a macroscopic observable, while the microscopic details of the system, such as the underlying network of interactions, are often partially or totally unknown. We already know that different interaction structures can give rise to a common functionality, understood as a common macroscopic observable. Building upon this fact, here we propose network transformations that keep the collective behavior of a large system of Kuramoto oscillators invariant. We derive a method based on information-theoretic principles that allows us to adjust the weights of the structural interactions so as to map random homogeneous in-degree networks into random heterogeneous networks and vice versa, keeping synchronization values invariant. The proposed transformations reveal an interesting principle: heterogeneous networks can be mapped to homogeneous ones with local information, but the reverse process needs to exploit higher-order information. The formalism provides analytical insight for tackling real complex scenarios when dealing with uncertainty in the measurements of the underlying connectivity structure.
NASA Astrophysics Data System (ADS)
Mousavi Nezhad, Mohaddeseh; Fisher, Quentin J.; Gironacci, Elia; Rezania, Mohammad
2018-06-01
Reliable prediction of the fracture process in shale-gas rocks remains one of the most significant challenges for establishing sustained, economic oil and gas production. This paper presents a modeling framework for the simulation of crack propagation in heterogeneous shale rocks. The framework is based on a variational approach consistent with Griffith's theory. It is used to reproduce the fracture propagation process in shale rock samples under standard Brazilian disk test conditions. Data collected from the experiments are employed to determine the specimens' tensile strength and fracture toughness. To incorporate the effects of shale formation heterogeneity in the simulation of crack paths, the fracture properties of the specimens are defined as spatially random fields. A computational strategy based on stochastic finite-element theory is developed that allows the effects of the heterogeneity of shale rocks on fracture evolution to be incorporated. A parametric study has been carried out to better understand how anisotropy and heterogeneity of the mechanical properties affect both the direction of cracks and rock strength.
Mediating processes of two communication interventions for breast cancer patients
Hawkins, Robert P.; Pingree, Suzanne; Shaw, Bret; Serlin, Ronald C.; Swoboda, Chris; Han, Jeong-Yeob; Carmack, Cindy L.; Salner, Andrew
2012-01-01
Objective Test whether three mediating processes of Self-Determination Theory are involved in intervention effects on quality of life for breast cancer patients. Methods A randomized clinical trial recruited newly diagnosed breast cancer patients for 6 months of (1) Internet training and access, (2) access to an integrated eHealth system for breast cancer (CHESS), (3) a series of phone conversations with a Human Cancer Information Mentor, or (4) both (2) and (3). Results This paper reports results after the initial 6 weeks of intervention, at which point patients in the combined condition had higher quality of life scores than those in the other three conditions. All three Self-Determination Theory constructs (autonomy, competence, and relatedness) mediated that effect as hypothesized. In addition, the single-intervention groups were superior to the Internet-only group on relatedness, though perhaps this was too soon for that to carry through to quality of life as well. Conclusions The SDT constructs do mediate these interventions’ effects. Practice implications Intervention design can profitably focus on enhancing autonomy, competence and relatedness. PMID:21081261
Implementation of a Cross-Layer Sensing Medium-Access Control Scheme.
Su, Yishan; Fu, Xiaomei; Han, Guangyao; Xu, Naishen; Jin, Zhigang
2017-04-10
In this paper, compressed sensing (CS) theory is utilized in a medium-access control (MAC) scheme for wireless sensor networks (WSNs). We propose a new cross-layer compressed sensing medium-access control (CL CS-MAC) scheme combining the physical layer and the data link layer, in which the wireless transmission in the physical layer is treated as a compression process for the packets requested in the data link layer, in accordance with CS theory. We first introduce the use of compressive complex requests to identify the exact set of active sensor nodes, which makes the scheme more efficient. Moreover, because the reconstruction process is executed in the complex field of the physical layer, where no bit or frame synchronization is needed, an asynchronous and random request scheme can be implemented without synchronization payload. We set up a testbed based on software-defined radio (SDR) to implement the proposed CL CS-MAC scheme and demonstrate its validity in practice. For large-scale WSNs, simulation results show that the proposed CL CS-MAC scheme provides higher throughput and robustness than the carrier sense multiple access (CSMA) and compressed sensing medium-access control (CS-MAC) schemes.
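To make the compressed-sensing step concrete: a few random "compressive requests" can identify which of n nodes are active. The decoder below is generic orthogonal matching pursuit, a textbook CS algorithm, not the specific reconstruction used in the paper; the matrix sizes are illustrative.

```python
# Sketch of the CS idea behind such a MAC scheme: m random requests (rows of
# Phi) recover a k-sparse active-node indicator x from y = Phi @ x.
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 128, 24, 3                      # nodes, requests, active nodes

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0  # sparse active-node indicator
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x                               # superimposed responses

support, r = [], y.copy()                 # orthogonal matching pursuit
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    sub = Phi[:, support]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    r = y - sub @ coef

print(sorted(np.flatnonzero(x).tolist()), sorted(support))
```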
A random walk rule for phase I clinical trials.
Durham, S D; Flournoy, N; Rosenberger, W F
1997-06-01
We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
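A simulation sketch of a biased-coin random walk rule of the type described above: de-escalate after a toxicity, otherwise escalate with probability b = G/(1 - G) to target the quantile G. The logistic dose-toxicity curve and the dose grid are illustrative assumptions, not the paper's worked example.

```python
# Biased-coin random walk design: dose allocations should center unimodally
# around the dose whose toxicity probability is the target quantile G.
import numpy as np

rng = np.random.default_rng(8)
doses = np.linspace(-2, 2, 9)                  # standardized dose levels
tox = lambda d: 1.0 / (1.0 + np.exp(-d))       # true (unknown) toxicity curve
G = 0.3
b = G / (1.0 - G)                              # escalation probability

level, visits = 0, np.zeros(len(doses), int)
for _ in range(2000):
    visits[level] += 1
    if rng.random() < tox(doses[level]):       # toxicity: step down
        level = max(level - 1, 0)
    elif rng.random() < b:                     # no toxicity: step up w.p. b
        level = min(level + 1, len(doses) - 1)

target = doses[np.argmin(np.abs(tox(doses) - G))]
print("most visited dose:", doses[np.argmax(visits)], "target near", target)
```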
Magnetic field line random walk in models and simulations of reduced magnetohydrodynamic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snodin, A. P.; Ruffolo, D.; Oughton, S.
2013-12-10
The random walk of magnetic field lines is examined numerically and analytically in the context of reduced magnetohydrodynamic (RMHD) turbulence, which provides a useful description of plasmas dominated by a strong mean field, such as in the solar corona. A recently developed non-perturbative theory of magnetic field line diffusion is compared with the diffusion coefficients obtained by accurate numerical tracing of magnetic field lines for both synthetic models and direct numerical simulations of RMHD. Statistical analysis of an ensemble of trajectories confirms the applicability of the theory, which very closely matches the numerical field line diffusion coefficient as a function of distance z along the mean magnetic field for a wide range of the Kubo number R. This theory employs Corrsin's independence hypothesis, sometimes thought to be valid only at low R. However, the results demonstrate that it works well up to R = 10, both for a synthetic RMHD model and an RMHD simulation. The numerical results from the RMHD simulation are compared with and without phase randomization, demonstrating a clear effect of coherent structures on the field line random walk for a very low Kubo number.
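A heavily simplified field-line-tracing sketch in the spirit of the study: a random-phase, z-dependent fluctuation model (our own toy, far cruder than RMHD) and the running field-line diffusion coefficient D(z) = <dx^2>/(2z).

```python
# Toy field-line random walk: integrate dx/dz = b_x/B_0 through a synthetic
# random-phase fluctuation field and estimate the diffusion coefficient.
import numpy as np

rng = np.random.default_rng(9)
nmodes, B0, db = 64, 1.0, 0.2
kz = rng.uniform(0.1, 5.0, nmodes)          # axial wavenumbers (illustrative)
amp = db / np.sqrt(nmodes)

nlines, zmax, dz = 500, 200.0, 0.05
zgrid = np.arange(0.0, zmax, dz)
xs = np.empty(nlines)
for i in range(nlines):
    ph = rng.uniform(0, 2 * np.pi, nmodes)  # fresh realization per field line
    bx = (amp * np.cos(np.outer(zgrid, kz) + ph)).sum(axis=1)
    xs[i] = np.sum(bx / B0) * dz            # x(zmax) = integral of b_x/B_0 dz

print("running diffusion coefficient D =", np.mean(xs**2) / (2 * zmax))
```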
NASA Technical Reports Server (NTRS)
Plotkin, Kenneth J.; Maglieri, Domenic J.; Sullivan, Brenda M.
2005-01-01
Turbulence has two distinctive effects on sonic booms: there is distortion in the form of random perturbations that appear behind the shock waves, and shock rise times are increased randomly. A first scattering theory by S. C. Crow in the late 1960s quantified the random distortions, and Crow's theory was shown to agree with available flight test data. A variety of theories for the shock thickness have been presented, all supporting the role of turbulence in increasing rise time above that of a basic molecular-relaxation structure. The net effect of these phenomena on the loudness of shaped, minimized booms is of significant interest. Initial analysis suggests that there would be no change to average loudness, but this had not been experimentally investigated. The January 2004 flight test of the Shaped Sonic Boom Demonstrator (SSBD), together with a reference unmodified F-5E, included a 12,500-foot linear ground sensor array with 28 digitally recorded sensor sites. This data set provides an opportunity to re-test Crow's theory for the post-shock perturbations, and to examine the net effect of turbulence on the loudness of shaped sonic booms.
Use of Theory in Behavior Change Interventions.
Bluethmann, Shirley M; Bartholomew, L Kay; Murphy, Caitlin C; Vernon, Sally W
2017-04-01
Theory use may enhance effectiveness of behavioral interventions, yet critics question whether theory-based interventions have been sufficiently scrutinized. This study applied a framework to evaluate theory use in physical activity interventions for breast cancer survivors. The aims were to (1) evaluate theory application intensity and (2) assess the association between extensiveness of theory use and intervention effectiveness. Studies were previously identified through a systematic search, including only randomized controlled trials published from 2005 to 2013, that addressed physical activity behavior change and studied survivors who were <5 years posttreatment. Eight theory items from Michie and Prestwich's coding framework were selected to calculate theory intensity scores. Studies were classified into three subgroups based on extensiveness of theory use (Level 1 = sparse; Level 2 = moderate; and Level 3 = extensive). Fourteen randomized controlled trials met search criteria. Most trials used the transtheoretical model (n = 5) or social cognitive theory (n = 3). For extensiveness of theory use, 5 studies were classified as Level 1, 4 as Level 2, and 5 as Level 3. Studies in the extensive group (Level 3) had the largest overall effect size (g = 0.76). Effects were more modest in Level 1 and 2 groups, with overall effect sizes of g = 0.28 and g = 0.36, respectively. Theory use is often viewed as essential to behavior change, but theory application varies widely. In this study, there was some evidence to suggest that extensiveness of theory use enhanced intervention effectiveness. However, there is more to learn about how theory can improve interventions for breast cancer survivors.
Kay, Aaron C.; Inzlicht, Michael
2015-01-01
Several prominent theories spanning clinical, social and developmental psychology suggest that people are motivated to see the world as a sensible orderly place. These theories presuppose that randomness is aversive because it is associated with unpredictability. If this is the case, thinking that the world is random should lead to increased anxiety and heightened monitoring of one’s actions and their consequences. Here, we conduct experimental tests of both of these ideas. Participants read one of three passages: (i) comprehensible order, (ii) incomprehensible order and (iii) randomness. In Study 1, we examined the effects of these passages on self-reported anxiety. In Study 2, we examined the effects of the same manipulation on the error-related negativity (ERN), an event-related brain potential associated with performance monitoring. We found that messages about randomness increased self-reported anxiety and ERN amplitude relative to comprehensible order, whereas incomprehensible order had intermediate effects. These results lend support to the theoretically important idea that randomness is unsettling because it implies that the world is unpredictable. PMID:25062840
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables are obtained. Their one-dimensional marginal distributions are widely used in probability theory. Generating functions of these multidimensional distributions are also derived.
Applying Chaos Theory to Lesson Planning and Delivery
ERIC Educational Resources Information Center
Cvetek, Slavko
2008-01-01
In this article, some of the ways in which thinking about chaos theory can help teachers and student-teachers to accept uncertainty and randomness as natural conditions in the classroom are considered. Building on some key features of complex systems commonly attributed to chaos theory (e.g. complexity, nonlinearity, sensitivity to initial…
From quantum to classical interactions between a free electron and a surface
NASA Astrophysics Data System (ADS)
Beierle, Peter James
Quantum theory is often cited as being one of the most empirically validated theories in terms of its predictive power and precision. These attributes have led to numerous scientific discoveries and technological advancements. However, the precise relationship between quantum and classical physics remains obscure. The prevailing description is known as decoherence theory, in which classical physics emerges from a more general quantum theory through environmental interaction. Sometimes referred to as the decoherence program, it does not solve the quantum measurement problem. We believe experiments performed between the microscopic and macroscopic worlds may help finish the program. The following considers a free electron that interacts with a surface (the environment), providing a controlled decoherence mechanism. There are non-decohering interactions to be examined and quantified before the weaker decohering effects can be filtered out. In the first experiment, an electron beam passes over a surface that is illuminated by low-power laser light. This induces a surface charge redistribution that deflects the electron. The parameters of this phenomenon are investigated. The system can be well understood in terms of classical electrodynamics, and the technological applications of this electron-beam switch are considered. Such phenomena may mask decoherence effects. A second experiment tests decoherence theory by introducing a nanofabricated diffraction grating before the surface. The electron undergoes diffraction through the grating, but as the electron passes over the surface, various physical models predict that the electron will lose its wave interference property. Image-charge-based models, which predict a larger loss of contrast than what is observed, are falsified (despite the electron experiencing an image charge force). A theoretical study demonstrates how a loss of contrast may be due not to the irreversible process of decoherence, but to dephasing (a reversible process due to randomization of the wavefunction's phase). To resolve this ambiguity, a correlation function on an ensemble of diffraction patterns is analyzed after an electron undergoes either process in a path integral calculation. The diffraction pattern is successfully recovered for dephasing, but not for decoherence, thus verifying the correlation function as a potential tool in experimental studies to determine the nature of the observed process.
Evaluation of programs to improve complementary feeding in infants and young children.
Frongillo, Edward A
2017-10-01
Evaluation of complementary feeding programs is needed to enhance knowledge on what works, to document responsible use of resources, and for advocacy. Evaluation is done during program conceptualization and design, implementation, and determination of effectiveness. This paper explains the role of evaluation in the advancement of complementary feeding programs, presenting concepts and methods and illustrating them through examples. Planning and investments for evaluations should occur from the beginning of the project life cycle. Essential to evaluation is articulation of a program theory on how change would occur and what program actions are required for change. Analysis of program impact pathways makes explicit the dynamic connections in the program theory and accounts for contextual factors that could influence program effectiveness. Evaluating implementation functioning is done through addressing questions about needs, coverage, provision, and utilization using information obtained from process evaluation, operations research, and monitoring. Evaluating effectiveness is done through assessing impact, efficiency, coverage, process, and causality. Plausibility designs ask whether the program seemed to have an effect above and beyond external influences, often using a nonrandomized control group and baseline and end line measures. Probability designs ask whether there was an effect using a randomized control group. Evaluations may not be able to use randomization, particularly for programs implemented at a large scale. Plausibility designs, innovative designs, or innovative combinations of designs sometimes are best able to provide useful information. Further work is needed to develop practical designs for evaluation of large-scale country programs on complementary feeding. © 2017 John Wiley & Sons Ltd.
Neural Correlates of Sex/Gender Differences in Humor Processing for Different Joke Types.
Chan, Yu-Chen
2016-01-01
Humor operates through a variety of techniques, which first generate surprise and then amusement and laughter once the unexpected incongruity is resolved. As different types of jokes use different techniques, the corresponding humor processes also differ. The present study builds on the framework of the 'tri-component theory of humor,' which details the mechanisms involved in cognition (comprehension), affect (appreciation), and laughter (expression). This study seeks to identify differences among joke types and between sexes/genders in the neural mechanisms underlying humor processing. Three types of verbal jokes, bridging-inference jokes (BJs), exaggeration jokes (EJs), and ambiguity jokes (AJs), were used as stimuli. The findings revealed differences in brain activity for an interaction between sex/gender and joke type. For BJs, women displayed greater activation in the temporoparietal-mesocortical-motor network than men, demonstrating the importance of the temporoparietal junction (TPJ) presumably for 'theory of mind' processing, the orbitofrontal cortex for motivational functions and reward coding, and the supplementary motor area for laughter. Women also showed greater activation than men in the frontal-mesolimbic network associated with EJs, including the anterior (frontopolar) prefrontal cortex (aPFC, BA 10) for executive control processes, and the amygdala and midbrain for reward anticipation and salience processes. Conversely, AJs elicited greater activation in men than women in the frontal-paralimbic network, including the dorsal prefrontal cortex (dPFC) and parahippocampal gyrus. All joke types elicited greater activation in the aPFC of women than of men, whereas men showed greater activation than women in the dPFC. To confirm the findings related to sex/gender differences, random group analysis and within group variance analysis were also performed. These findings help further establish the mechanisms underlying the processing of different joke types for the sexes/genders and provide a neural foundation for a theory of sex/gender differences in humor.
Dimensional study of the dynamical arrest in a random Lorentz gas.
Jin, Yuliang; Charbonneau, Patrick
2015-04-01
The random Lorentz gas (RLG) is a minimal model for transport in heterogeneous media. Upon increasing the obstacle density, it exhibits a growing subdiffusive transport regime and then a dynamical arrest. Here, we study the dimensional dependence of the dynamical arrest, which can be mapped onto the void percolation transition for Poisson-distributed point obstacles. We numerically determine the arrest in dimensions d=2-6. Comparison of the results with standard mode-coupling theory reveals that the dynamical theory prediction grows increasingly worse with d. In an effort to clarify the origin of this discrepancy, we relate the dynamical arrest in the RLG to the dynamic glass transition of the infinite-range Mari-Kurchan-model glass former. Through a mixed static and dynamical analysis, we then extract an improved dimensional scaling form as well as a geometrical upper bound for the arrest. The results suggest that understanding the asymptotic behavior of the random Lorentz gas may be key to surmounting fundamental difficulties with the mode-coupling theory of glasses.
NASA Astrophysics Data System (ADS)
Radgolchin, Moeen; Moeenfard, Hamid
2018-02-01
The construction of self-powered micro-electro-mechanical units by converting the mechanical energy of the systems into electrical power has attracted much attention in recent years. While power harvesting from deterministic external excitations is state of the art, it has been much more difficult to derive mathematical models for scavenging electrical energy from ambient random vibrations, due to the stochastic nature of the excitations. The current research concerns analytical modeling of micro-bridge energy harvesters based on random vibration theory. Since classical elasticity fails to accurately predict the mechanical behavior of micro-structures, strain gradient theory is employed as a powerful tool to increase the accuracy of the random vibration modeling of the micro-harvester. Equations of motion of the system in the time domain are derived using the Lagrange approach. These are then utilized to determine the frequency and impulse responses of the structure. Assuming the energy harvester to be subjected to a combination of broadband and limited-band random support motion and transverse loading, closed-form expressions for mean, mean square, correlation and spectral density of the output power are derived. The suggested formulation is further exploited to investigate the effect of the different design parameters, including the geometric properties of the structure as well as the properties of the electrical circuit on the resulting power. Furthermore, the effect of length scale parameters on the harvested energy is investigated in detail. It is observed that the predictions of classical and even simple size-dependent theories (such as couple stress) appreciably differ from the findings of strain gradient theory on the basis of random vibration. This study presents a first-time modeling of micro-scale harvesters under stochastic excitations using a size-dependent approach and can be considered as a reliable foundation for future research in the field of micro/nano harvesters subjected to non-deterministic loads.
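A generic random-vibration check relevant to the spectral-density results above: for a unit-mass oscillator driven by white noise, the mean-square response has the classical closed form E[x^2] = pi*S0/(2*zeta*wn^3). This is a standard textbook result, verified numerically below; the paper's strain-gradient harvester model is of course far richer.

```python
# Classical random-vibration identity: for x'' + 2*zeta*wn*x' + wn^2*x = f(t)
# with white-noise forcing of two-sided PSD S0,
#   E[x^2] = integral of S0*|H(w)|^2 dw = pi*S0 / (2*zeta*wn^3).
import numpy as np

wn, zeta, S0 = 2 * np.pi * 50, 0.02, 1e-4      # illustrative values
w = np.linspace(-50 * wn, 50 * wn, 2_000_001)
H2 = 1.0 / ((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)  # |H(w)|^2
numeric = np.trapz(S0 * H2, w)                 # spectral integral
analytic = np.pi * S0 / (2 * zeta * wn**3)
print(numeric, analytic)                       # should agree closely
```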
Russo, Lucia; Russo, Paola; Siettos, Constantinos I.
2016-01-01
Based on complex network theory, we propose a computational methodology which addresses the spatial distribution of fuel breaks for the inhibition of the spread of wildland fires on heterogeneous landscapes. This is a two-level approach where the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest of randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece whose major part of its forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as this can be realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse density of vegetation may act as hubs for the spread of the fire. PMID:27780249
NASA Astrophysics Data System (ADS)
2011-07-01
WE RECOMMEND
Fun Fly Stick Science Kit: Fun fly stick introduces electrostatics to youngsters.
Special Relativity: Text makes a useful addition to the study of relativity as an undergraduate.
LabVIEW™ 2009 Education Edition: LabVIEW sets industry standard for gathering and analysing data, signal processing, instrumentation design and control, and automation and robotics.
Edison and Ford Winter Estates: Thomas Edison's home is open to the public.
The Computer History Museum: Take a walk through technology history at this computer museum.
WORTH A LOOK
Fast Car Physics: Book races through physics.
Beautiful Invisible: The main subject of this book is theoretical physics.
Quantum Theory Cannot Hurt You: A guide to physics on the large and small scale.
Chaos: The Science of Predictable Random Motion: Book explores the mathematics behind chaotic behaviour.
Seven Wonders of the Universe: A textual trip through the wonderful universe.
HANDLE WITH CARE
Marie Curie: A Biography: Book fails to capture Curie's science.
WEB WATCH
Web clips to liven up science lessons.
A novel method about detecting missing holes on the motor carling
NASA Astrophysics Data System (ADS)
Xu, Hongsheng; Tan, Hao; Li, Guirong
2018-03-01
After a detailed analysis of how an image-processing system can be used to detect missing holes on the motor carling, we design the complete system in accordance with the actual production conditions of the motor carling. We then explain the system's hardware and software in detail, introducing their general functions and, building on this analysis, discussing the hardware and software modules and the theory behind their design. The determination of the image region to process, edge detection, and the randomized Hough transform for circle detection are explained in detail. Finally, the results of testing the system in the laboratory and in the factory are given.
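The core of a randomized Hough transform for circles, as named in the pipeline above: sample three edge points, compute their circumcircle, and accumulate votes. The tolerances, the synthetic test data, and all other details are illustrative assumptions, not the paper's implementation.

```python
# Randomized Hough transform for circle detection on a set of edge points.
import numpy as np

def circumcircle(p1, p2, p3):
    # Solve for the center equidistant from the three points.
    A = 2.0 * np.array([p2 - p1, p3 - p1], dtype=float)
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1], dtype=float)
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p1 - c)

def rht_circle(edge_pts, iters=3000, tol=1.5, seed=0):
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(iters):
        i, j, k = rng.choice(len(edge_pts), 3, replace=False)
        try:
            c, r = circumcircle(edge_pts[i], edge_pts[j], edge_pts[k])
        except np.linalg.LinAlgError:        # collinear sample: skip
            continue
        key = (round(c[0] / tol), round(c[1] / tol), round(r / tol))
        votes[key] = votes.get(key, 0) + 1
    (cx, cy, r), _ = max(votes.items(), key=lambda kv: kv[1])
    return cx * tol, cy * tol, r * tol

# Synthetic test: noisy points on a circle of radius 20 centered at (50, 40).
rng = np.random.default_rng(1)
th = rng.uniform(0, 2 * np.pi, 200)
pts = np.stack([50 + 20 * np.cos(th), 40 + 20 * np.sin(th)], axis=1)
pts += rng.normal(0, 0.3, pts.shape)
print(rht_circle(pts))
```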
Telegraph noise in Markovian master equation for electron transport through molecular junctions
NASA Astrophysics Data System (ADS)
Kosov, Daniel S.
2018-05-01
We present a theoretical approach for solving the Markovian master equation for quantum transport with stochastic telegraph noise. Considering the probabilities as functionals of a random telegraph process, we use Novikov's functional method to convert the stochastic master equation into a set of deterministic differential equations. The equations are then solved in Laplace space, and the expression for the probability vector averaged over the ensemble of realisations of the stochastic process is obtained. We apply the theory to study the manifestations of telegraph noise in the transport properties of molecular junctions. We consider quantum electron transport in a resonant-level molecule as well as polaronic-regime transport in a molecular junction with electron-vibration interaction.
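For dichotomous (telegraph) noise, Novikov-type averaging closes the moment equations exactly (the Shapiro-Loginov relation). The sketch below applies it to a toy decay problem dp/dt = -(G0 + D*eta(t))*p of our own choosing, much simpler than the transport master equation treated in the paper, and checks the closed equations against a direct Monte Carlo average over noise realizations.

```python
# Telegraph-noise averaging: eta(t) = +-1 flips at rate nu, eta^2 = 1, so the
# moments m0 = <p> and m1 = <eta*p> obey a closed pair of ODEs
# (Shapiro-Loginov): dm0/dt = -G0*m0 - D*m1, dm1/dt = -G0*m1 - D*m0 - 2*nu*m1.
import numpy as np

G0, D, nu, T, dt = 1.0, 0.6, 0.5, 5.0, 1e-3   # illustrative parameters
steps = int(T / dt)

m0, m1 = 1.0, 0.0
for _ in range(steps):
    m0, m1 = (m0 - (G0 * m0 + D * m1) * dt,
              m1 - (G0 * m1 + D * m0 + 2 * nu * m1) * dt)

# Direct Monte Carlo over telegraph realizations for comparison.
rng = np.random.default_rng(5)
trials = 20000
p = np.ones(trials)
eta = rng.choice([-1.0, 1.0], trials)
for _ in range(steps):
    p *= 1.0 - (G0 + D * eta) * dt
    flip = rng.random(trials) < nu * dt
    eta[flip] = -eta[flip]

print(m0, p.mean())   # the two ensemble averages should agree closely
```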
Replication of Cancellation Orders Using First-Passage Time Theory in Foreign Currency Market
NASA Astrophysics Data System (ADS)
Boilard, Jean-François; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako
Our research focuses on the annihilation dynamics of limit orders in a spot foreign currency market for various currency pairs. We analyze the cancellation order distribution conditioned on the normalized distance from the mid-price; where the normalized distance is defined as the final distance divided by the initial distance. To reproduce real data, we introduce two simple models that assume the market price moves randomly and cancellation occurs either after fixed time t or following the Poisson process. Results of our model qualitatively reproduce basic statistical properties of cancellation orders of the data when limit orders are cancelled according to the Poisson process. We briefly discuss implication of our findings in the construction of more detailed microscopic models.
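A Monte Carlo toy of the second model described above: a randomly walking price and a limit order cancelled at an exponentially distributed (Poisson) time, recording the normalized final distance. All parameters are illustrative.

```python
# Toy model: the distance of a limit order from the mid-price follows a
# random walk; the order is cancelled at a Poisson (exponential) time unless
# the distance first hits zero (execution).
import numpy as np

rng = np.random.default_rng(6)
d0, sigma, rate, trials = 10.0, 1.0, 0.01, 20000

lifetimes = rng.exponential(1.0 / rate, trials).astype(int) + 1
norm_dist = []
for n in lifetimes:
    path = d0 + np.cumsum(sigma * rng.choice([-1.0, 1.0], n))
    if (path <= 0.0).any():
        continue                     # order executed before cancellation
    norm_dist.append(path[-1] / d0)  # normalized distance at cancellation

print("mean normalized distance of cancelled orders:", np.mean(norm_dist))
```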
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem which involves a communication system in which an intruder can eavesdrop a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives pass a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity achieving codes. [This is a shortened version of the actual abstract in the thesis.]
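A toy instance of the nonadaptive group testing problem mentioned above, decoded with the simple COMP rule (any item appearing in a negative pool is declared clean). This generic scheme only illustrates the problem; the thesis constructs far stronger explicit schemes via randomness condensers.

```python
# Nonadaptive group testing with random pools and the COMP decoder.
import numpy as np

rng = np.random.default_rng(11)
n, d, tests = 400, 4, 60                   # items, defectives, pools

defective = np.zeros(n, bool)
defective[rng.choice(n, d, replace=False)] = True
M = rng.random((tests, n)) < 1.0 / d       # random pooling matrix
outcome = (M & defective).any(axis=1)      # pool positive iff it hits a defective

# COMP: items appearing in any negative pool cannot be defective.
candidate = np.ones(n, bool)
for t in range(tests):
    if not outcome[t]:
        candidate &= ~M[t]

print("true:", np.flatnonzero(defective))
print("decoded superset:", np.flatnonzero(candidate))
```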
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.
2016-01-01
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell-Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell-Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies.
Mishchenko, Michael I; Dlugach, Janna M; Yurkin, Maxim A; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R Lee; Travis, Larry D; Yang, Ping; Zakharova, Nadezhda T
2016-05-16
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell-Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell-Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies.
Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.
2018-01-01
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies. PMID:29657355
Some Minorants and Majorants of Random Walks and Levy Processes
NASA Astrophysics Data System (ADS)
Abramson, Joshua Simon
This thesis consists of four chapters, all relating to some sort of minorant or majorant of random walks or Levy processes. In Chapter 1 we provide an overview of recent work on descriptions and properties of the convex minorant of random walks and Levy processes as detailed in Chapter 2, [72] and [73]. This work rejuvenated the field of minorants, and led to the work in all the subsequent chapters. The results surveyed include point process descriptions of the convex minorant of random walks and Levy processes on a fixed finite interval, up to an independent exponential time, and in the infinite horizon case. These descriptions follow from the invariance of these processes under an adequate path transformation. In the case of Brownian motion, we note how further special properties of this process, including time-inversion, imply a sequential description for the convex minorant of the Brownian meander. This chapter is based on [3], which was co-written with Jim Pitman, Nathan Ross and Geronimo Uribe Bravo. Chapter 1 serves as a long introduction to Chapter 2, in which we offer a unified approach to the theory of concave majorants of random walks. The reasons for the switch from convex minorants to concave majorants are discussed in Section 1.1, but the results are all equivalent. This unified theory is arrived at by providing a path transformation for a walk of finite length that leaves the law of the walk unchanged whilst providing complete information about the concave majorant - the path transformation is different from the one discussed in Chapter 1, but this is necessary to deal with a more general case than the standard one as done in Section 2.6. The path transformation of Chapter 1, which is discussed in detail in Section 2.8, is more relevant to the limiting results for Levy processes that are of interest in Chapter 1. Our results lead to a description of a walk of random geometric length as a Poisson point process of excursions away from its concave majorant, which is then used to find a complete description of the concave majorant of a walk of infinite length. In the case where subsets of increments may have the same arithmetic mean (the more general case mentioned above), we investigate three nested compositions that naturally arise from our construction of the concave majorant. This chapter is based on [4], which was co-written with Jim Pitman. In Chapter 3, we study the Lipschitz minorant of a Levy process. For α > 0, the α-Lipschitz minorant of a function f : R→R is the greatest function m : R→R such that m ≤ f and |m(s) − m(t)| ≤ α|s − t| for all s, t ∈ R, should such a function exist. If X = (X_t)_{t ∈ R} is a real-valued Levy process that is not a pure linear drift with slope ±α, then the sample paths of X have an α-Lipschitz minorant almost surely if and only if |E[X_1]| < α. Denoting the minorant by M, we investigate properties of the random closed set Z := {t ∈ R : M_t = X_t ∧ X_{t−}}, which, since it is regenerative and stationary, has the distribution of the closed range of some subordinator "made stationary" in a suitable sense. We give conditions for the contact set Z to be countable or to have zero Lebesgue measure, and we obtain formulas that characterize the Levy measure of the associated subordinator. We study the limit of Z as α → ∞ and find for the so-called abrupt Levy processes introduced by Vigon that this limit is the set of local infima of X.
When X is a Brownian motion with drift β such that |β| < α, we calculate explicitly the densities of various random variables related to the minorant. This chapter is based on [2], which was co-written with Steven N. Evans. Finally, in Chapter 4 we study the structure of the shocks for the inviscid Burgers equation in dimension 1 when the initial velocity is given by Levy noise, or equivalently when the initial potential is a two-sided Levy process. This shock structure turns out to give rise to a parabolic minorant of the Levy process (see Section 4.2 for details). The main results are that when ψ_0 is abrupt in the sense of Vigon or has bounded variation with limsup_{h↓0} h^{-2} ψ_0(h) = ∞, the set of points with zero velocity is regenerative, and that in the latter case this set is equal to the set of Lagrangian regular points, which is non-empty. When ψ_0 is abrupt the shock structure is discrete, and when ψ_0 is eroded there are no rarefaction intervals. This chapter is based on [1].
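For a finite walk, the convex minorant studied in Chapters 1 and 2 can be computed with a standard lower-hull pass; the following is a minimal sketch (Andrew's monotone chain, our own implementation), illustrating only the object itself, not the thesis's point-process descriptions or path transformations.

```python
import numpy as np

rng = np.random.default_rng(9)

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (orientation test)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_minorant(path):
    """Vertices of the greatest convex function below the walk, i.e. the
    lower convex hull of the points (n, S_n) (Andrew's monotone chain)."""
    hull = []
    for p in enumerate(path):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

walk = np.concatenate(([0.0], np.cumsum(rng.standard_normal(20))))
for n, height in convex_minorant(walk):
    print(f"vertex at n = {n:2d}, height = {height:+.3f}")
```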
Effects of Person- and Process-Focused Feedback on Prosocial Behavior in Middle Childhood
Dunsmore, Julie C.
2014-01-01
Effects of person- and process-focused feedback, parental lay theories, and prosocial self-concept on children’s prosocial behavior were investigated with 143 9- and 10-year-old children who participated in a single session. Parents reported entity (person-focused) and incremental (process-focused) beliefs related to prosocial behavior. Children completed measures of prosocial self-concept, then participated in a virtual online chat with child actors who asked for help with service projects. After completing the chat, children could assist with the service projects. In the first cohort, children were randomly assigned to receive person-focused, process-focused, or control feedback about sympathy. In the second cohort, with newly recruited families, children received no feedback. When given process-focused feedback, children spent less time helping and worked on fewer service projects. When given no feedback, children spent less time helping when parents held incremental (process-focused) beliefs. Children with higher prosocial self-concept who received no feedback worked on more service projects. PMID:25684859
Barber, T X; Wilson, S C
1977-10-07
Sixty-six subjects were tested on a new scale for evaluating "hypnotic-like" experiences (The Creative Imagination Scale), which includes ten standardized test-suggestions (e.g. suggestions for arm heaviness, finger anesthesia, time distortion, and age regression). The subjects were randomly assigned to one of three treatment groups (Think-With Instructions, trance induction, and Control), with 22 subjects in each group. The new Cognitive-Behavioral Theory predicted that subjects exposed to preliminary instructions designed to demonstrate how to think and imagine along with the suggested themes (Think-With Instructions) would be more responsive to test-suggestions for anesthesia, time distortion, age regression, and so on, than subjects exposed to a trance-induction procedure. On the other hand, the traditional Trance State Theory predicted that a trance induction would be more effective than Think-With Instructions in enhancing responses to such suggestions. Subjects exposed to the Think-With Instructions obtained significantly higher scores on the test-suggestions than those exposed either to the traditional trance-induction procedure or to the control treatment. Scores of subjects who received the trance-induction procedure were not significantly different from those of the subjects who received the control treatment. The results thus supported the new Cognitive-Behavioral Theory and contradicted the traditional Trance State Theory of hypnosis. Two recent experiments, by De Stefano and by Katz, confirmed the above experimental results and offered further support for the Cognitive-Behavioral Theory. In both recent experiments, subjects randomly assigned to a "Think-With Instructions" treatment were more responsive to test-suggestions than those randomly assigned to a traditional trance-induction treatment.
Immunization of Epidemics in Multiplex Networks
Zhao, Dawei; Wang, Lianhai; Li, Shudong; Wang, Zhen; Wang, Lin; Gao, Bo
2014-01-01
Up to now, immunization against disease propagation has attracted great attention in both theoretical and experimental research. However, the vast majority of existing results are limited to the simple assumption of a single-layer networked population, which is inconsistent with recent developments in complex network theory: each node can play multiple roles in different topological connections. Inspired by this fact, we propose immunization strategies for multiplex networks, including multiplex node-based random (targeted) immunization and layer node-based random (targeted) immunization. Using generating function theory, a theoretical analysis is developed to calculate the immunization threshold, which is regarded as the most critical index of the effectiveness of the proposed immunization strategies. Interestingly, both types of random immunization strategy are more efficient at controlling disease spread on multiplex Erdös-Rényi (ER) random networks, while targeted immunization strategies provide better protection on multiplex scale-free (SF) networks. PMID:25401755
Immunization of epidemics in multiplex networks.
Zhao, Dawei; Wang, Lianhai; Li, Shudong; Wang, Zhen; Wang, Lin; Gao, Bo
2014-01-01
Up to now, immunization against disease propagation has attracted great attention in both theoretical and experimental research. However, the vast majority of existing results are limited to the simple assumption of a single-layer networked population, which is inconsistent with recent developments in complex network theory: each node can play multiple roles in different topological connections. Inspired by this fact, we propose immunization strategies for multiplex networks, including multiplex node-based random (targeted) immunization and layer node-based random (targeted) immunization. Using generating function theory, a theoretical analysis is developed to calculate the immunization threshold, which is regarded as the most critical index of the effectiveness of the proposed immunization strategies. Interestingly, both types of random immunization strategy are more efficient at controlling disease spread on multiplex Erdös-Rényi (ER) random networks, while targeted immunization strategies provide better protection on multiplex scale-free (SF) networks.
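As an illustrative aside, the qualitative gap between random and degree-targeted immunization on heterogeneous networks can be reproduced with a small single-layer SIR simulation; the sketch below uses networkx and illustrative parameters, and does not attempt the paper's multiplex setting or its generating-function threshold calculation.

```python
import random
import networkx as nx

def sir_outbreak_fraction(G, immune, beta=0.3, seed=1):
    """Discrete-step SIR with per-edge transmission probability beta;
    immunized nodes are simply removed from the susceptible pool."""
    rng = random.Random(seed)
    susceptible = set(G) - set(immune)
    if not susceptible:
        return 0.0
    start = next(iter(susceptible))
    susceptible.discard(start)
    infected, recovered = {start}, set()
    while infected:
        new = set()
        for u in infected:
            for v in G[u]:
                if v in susceptible and rng.random() < beta:
                    new.add(v)
                    susceptible.discard(v)
        recovered |= infected
        infected = new
    return len(recovered) / G.number_of_nodes()

G = nx.barabasi_albert_graph(5000, 3, seed=7)   # scale-free test network
k = int(0.2 * G.number_of_nodes())              # immunize 20% of nodes
random_immune = set(random.Random(2).sample(list(G), k))
targeted_immune = set(sorted(G, key=G.degree, reverse=True)[:k])

print("random immunization, outbreak fraction:  ",
      sir_outbreak_fraction(G, random_immune))
print("targeted immunization, outbreak fraction:",
      sir_outbreak_fraction(G, targeted_immune))
```

On the scale-free graph the targeted strategy removes the hubs and typically suppresses the outbreak far more effectively, matching the qualitative conclusion above.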
NASA Astrophysics Data System (ADS)
Egli, R.; Zhao, X.
2015-04-01
We present a general theory for the acquisition of natural remanent magnetizations (NRM) in sediment under the influence of (a) magnetic torques, (b) randomizing torques, and (c) torques resulting from interaction forces. Dynamic equilibrium between (a) and (b) in the water column and at the sediment-water interface generates a detrital remanent magnetization (DRM), while much stronger randomizing torques may be provided by bioturbation inside the mixed layer. These generate a so-called mixed remanent magnetization (MRM), which is stabilized by mechanical interaction forces. During the time required to cross the surface mixed layer, DRM is lost and MRM is acquired at a rate that depends on bioturbation intensity. Both processes are governed by a MRM lock-in function. The final NRM intensity is controlled mainly by a single parameter γ that is defined as the product of rotational diffusion and mixed-layer thickness, divided by sedimentation rate. This parameter defines three regimes: (1) slow mixing (γ < 0.2) leading to DRM preservation and insignificant MRM acquisition, (2) fast mixing (γ > 10) with MRM acquisition and full DRM randomization, and (3) intermediate mixing. Because the acquisition efficiency of DRM is larger than that of MRM, NRM intensity is particularly sensitive to γ in the case of mixed regimes, generating variable NRM acquisition efficiencies. This model explains (1) lock-in delays that can be matched with empirical reconstructions from paleomagnetic records, (2) the existence of small lock-in depths that lead to DRM preservation, (3) specific NRM acquisition efficiencies of magnetofossil-rich sediments, and (4) some relative paleointensity artifacts.
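A minimal helper, assuming consistent units among the three inputs, that computes the abstract's control parameter γ and reports the corresponding regime (the function name and sample values are ours):

```python
def nrm_mixing_regime(rot_diffusion, mixed_layer_thickness, sedimentation_rate):
    """Classify the NRM acquisition regime from the single control
    parameter gamma = rotational diffusion * mixed-layer thickness
    / sedimentation rate (inputs must use consistent units)."""
    gamma = rot_diffusion * mixed_layer_thickness / sedimentation_rate
    if gamma < 0.2:
        return gamma, "slow mixing: DRM preserved, insignificant MRM"
    if gamma > 10:
        return gamma, "fast mixing: MRM acquired, DRM fully randomized"
    return gamma, "intermediate mixing: sensitive, variable NRM efficiency"

print(nrm_mixing_regime(rot_diffusion=2.0,
                        mixed_layer_thickness=0.1,
                        sedimentation_rate=0.05))
```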
Lancarotte, Inês; Nobre, Moacyr Roberto
2016-01-01
The aim of this study was to identify and reflect on the methods employed by studies focusing on intervention programs for the primordial and primary prevention of cardiovascular diseases. The PubMed, EMBASE, SciVerse Hub-Scopus, and Cochrane Library electronic databases were searched using the terms ‘effectiveness AND primary prevention AND risk factors AND cardiovascular diseases’ for systematic reviews, meta-analyses, randomized clinical trials, and controlled clinical trials in the English language. A descriptive analysis of the employed strategies, theories, frameworks, applied activities, and measurement of the variables was conducted. Nineteen primary studies were analyzed. Heterogeneity was observed in the outcome evaluations, not only in the selected domains but also in the indicators used to measure the variables. There was also a predominance of repeated cross-sectional survey design, differences in community settings, and variability related to the randomization unit when randomization was implemented as part of the sample selection criteria; furthermore, particularities related to measures, limitations, and confounding factors were observed. The employed strategies, including their advantages and limitations, and the employed theories and frameworks are discussed, and risk communication, as the key element of the interventions, is emphasized. A methodological process of selecting and presenting the information to be communicated is recommended, and a systematic theoretical perspective to guide the communication of information is advised. The risk assessment concept, its essential elements, and the relevant role of risk perception are highlighted. It is fundamental for communication that statements targeting other people’s understanding be prepared using systematic data. PMID:27982169
Kozma, Robert; Freeman, Walter J.
2017-01-01
Measurements of local field potentials over the cortical surface and the scalp of animals and human subjects reveal intermittent bursts of beta and gamma oscillations. During the bursts, narrow-band metastable amplitude modulation (AM) patterns emerge for a fraction of a second and ultimately dissolve to the broad-band random background activity. The burst process depends on previously learnt conditioned stimuli (CS), thus different AM patterns may emerge in response to different CS. This observation leads to our cinematic theory of cognition, in which perception happens in discrete steps manifested in the sequence of AM patterns. Our article summarizes findings in the past decades on experimental evidence for the cinematic theory of cognition and relevant mathematical models. We treat cortices as dissipative systems that self-organize themselves near a critical level of activity that is a non-equilibrium metastable state. Criticality is arguably a key aspect of brains in their rapid adaptation, reconfiguration, high storage capacity, and sensitive response to external stimuli. Self-organized criticality (SOC) became an important concept to describe neural systems. We argue that transitions from one AM pattern to the other require the concept of phase transitions, extending beyond the dynamics described by SOC. We employ random graph theory (RGT) and percolation dynamics as fundamental mathematical approaches to model fluctuations in the cortical tissue. Our results indicate that perceptions are formed through a phase transition from a disorganized (high entropy) to a well-organized (low entropy) state, which explains the swiftness of the emergence of the perceptual experience in response to learned stimuli. PMID:28352218
Adding statistical regularity results in a global slowdown in visual search.
Vaskevich, Anna; Luria, Roy
2018-05-01
Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance. Copyright © 2018 Elsevier B.V. All rights reserved.
Effects of coarse-graining on fluctuations in gene expression
NASA Astrophysics Data System (ADS)
Pedraza, Juan; Paulsson, Johan
2008-03-01
Many cellular components are present in such low numbers per cell that random births and deaths of individual molecules can cause significant 'noise' in concentrations. But biochemical events do not necessarily occur in steps of individual molecules. Some processes are greatly randomized when synthesis or degradation occurs in large bursts of many molecules in a short time interval. Conversely, each birth or death of a macromolecule could involve several small steps, creating a memory between individual events. Here we present a generalized theory for stochastic gene expression, formulating the variance in protein abundance in terms of the randomness of the individual events, and discuss the effective coarse-graining of the molecular hardware. We show that common molecular mechanisms produce gestation and senescence periods that can reduce noise without changing average abundances, lifetimes, or any concentration-dependent control loops. We also show that single-cell experimental methods that are now commonplace in cell biology do not discriminate between qualitatively different stochastic principles, but that this in turn makes them better suited for identifying which components introduce fluctuations.
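The burst-randomization effect described above can be illustrated with a small Gillespie simulation: for geometrically distributed bursts, a standard calculation gives a Fano factor (variance over mean) of roughly the mean burst size, versus about 1 when molecules are made one at a time. The rates below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_bursty(k_burst=1.0, burst_mean=10, gamma=0.1, t_end=5000.0):
    """Gillespie simulation: bursts of geometric size (mean burst_mean)
    arrive at rate k_burst; molecules degrade one at a time at rate
    gamma*n. Returns the time-weighted mean and Fano factor."""
    t, n = 0.0, 0
    holds, counts = [], []
    while t < t_end:
        total = k_burst + gamma * n
        dt = rng.exponential(1.0 / total)
        holds.append(dt)
        counts.append(n)                           # state n held for time dt
        t += dt
        if rng.random() < k_burst / total:
            n += rng.geometric(1.0 / burst_mean)   # burst of synthesis
        else:
            n -= 1                                 # single degradation event
    w, x = np.array(holds), np.array(counts, dtype=float)
    mean = np.average(x, weights=w)
    var = np.average((x - mean) ** 2, weights=w)
    return mean, var / mean

mean, fano = simulate_bursty()
# Expect mean ~ 100 and Fano factor ~ 10 (the mean burst size) here,
# compared with Fano ~ 1 for one-molecule births at the same mean.
print(f"mean copy number: {mean:.1f}, Fano factor: {fano:.1f}")
```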
NASA Astrophysics Data System (ADS)
Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui
2018-04-01
Considering that multi-source uncertainties from inherent nature as well as the external environment are unavoidable and severely affect the controller performance, the dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, the uncertainty quantification analysis and time-variant reliability estimation corresponding to the closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with specific implementation of random items. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for the vibration control systems and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the methodology developed.
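A rough sketch of the sampling side of such an assessment, assuming a stand-in second-order closed-loop plant rather than the authors' systems: each sampled parameter set is propagated with a classical fourth-order Runge-Kutta integrator, and a first-passage-style failure probability is estimated against a response threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

def rk4_step(f, x, t, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def closed_loop(omega, zeta, kp):
    """x'' + 2*zeta*omega*x' + (omega**2 + kp)*x = 0: an oscillator with
    proportional feedback gain kp (an illustrative stand-in plant)."""
    def f(t, x):
        return np.array([x[1], -(omega**2 + kp) * x[0] - 2 * zeta * omega * x[1]])
    return f

h, T, threshold = 0.01, 10.0, 0.4
failures, n_samples = 0, 500
for _ in range(n_samples):
    omega = rng.normal(2.0, 0.1)        # random uncertainty
    zeta = rng.uniform(0.05, 0.15)      # interval uncertainty, sampled here
    f = closed_loop(omega, zeta, kp=1.0)
    x, peak = np.array([0.0, 1.0]), 0.0     # unit initial velocity
    for t in np.arange(0.0, T, h):
        x = rk4_step(f, x, t, h)
        peak = max(peak, abs(x[0]))
    failures += peak > threshold        # first-passage failure check
print("estimated failure probability:", failures / n_samples)
```

A genuinely hybrid treatment would propagate the interval and convex sets as bounds rather than sampling them; sampling is used here only to keep the sketch short.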
A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses
USDA-ARS?s Scientific Manuscript database
Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact, although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...
Chaos Modeling: Increasing Educational Researchers' Awareness of a New Tool.
ERIC Educational Resources Information Center
Bobner, Ronald F.; And Others
Chaos theory is being used as a tool to study a wide variety of phenomena. It is a philosophical and empirical approach that attempts to explain relationships previously thought to be totally random. Although some relationships are truly random, many data appear to be random but reveal repeatable patterns of behavior under further investigation.…
Simple Emergent Power Spectra from Complex Inflationary Physics
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2016-09-01
We construct ensembles of random scalar potentials for Nf-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf = O(few), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf ≫ 1, the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large Nf universality of random matrix theory.
Simple Emergent Power Spectra from Complex Inflationary Physics.
Dias, Mafalda; Frazer, Jonathan; Marsh, M C David
2016-09-30
We construct ensembles of random scalar potentials for N_{f}-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For N_{f}=O(few), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For N_{f}≫1, the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large N_{f} universality of random matrix theory.
Symmetry breaking in tensor models
NASA Astrophysics Data System (ADS)
Benedetti, Dario; Gurau, Razvan
2015-11-01
In this paper we analyze a quartic tensor model with one interaction for a tensor of arbitrary rank. This model has a critical point where a continuous limit of infinitely refined random geometries is reached. We show that the critical point corresponds to a phase transition in the tensor model associated to a breaking of the unitary symmetry. We analyze the model in the two phases and prove that, in a double scaling limit, the symmetric phase corresponds to a theory of infinitely refined random surfaces, while the broken phase corresponds to a theory of infinitely refined random nodal surfaces. At leading order in the double scaling limit planar surfaces dominate in the symmetric phase, and planar nodal surfaces dominate in the broken phase.
Horizon in random matrix theory, the Hawking radiation, and flow of cold atoms.
Franchini, Fabio; Kravtsov, Vladimir E
2009-10-16
We propose a Gaussian scalar field theory in a curved 2D metric with an event horizon as the low-energy effective theory for a weakly confined, invariant random matrix ensemble (RME). The presence of an event horizon naturally generates a bath of Hawking radiation, which introduces a finite temperature in the model in a nontrivial way. A similar mapping with a gravitational analogue model has been constructed for a Bose-Einstein condensate (BEC) pushed to flow at a velocity higher than its speed of sound, with Hawking radiation as sound waves propagating over the cold atoms. Our work suggests a threefold connection between a moving BEC system, black-hole physics and unconventional RMEs with possible experimental applications.
Effective-medium theory of elastic waves in random networks of rods.
Katz, J I; Hoffman, J J; Conradi, M S; Miller, J G
2012-06-01
We formulate an effective medium (mean field) theory of a material consisting of randomly distributed nodes connected by straight slender rods, hinged at the nodes. Defining wavelength-dependent effective elastic moduli, we calculate both the static moduli and the dispersion relations of ultrasonic longitudinal and transverse elastic waves. At finite wave vector k the waves are dispersive, with phase and group velocities decreasing with increasing wave vector. These results are directly applicable to networks with empty pore space. They also describe the solid matrix in two-component (Biot) theories of fluid-filled porous media. We suggest the possibility of low density materials with higher ratios of stiffness and strength to density than those of foams, aerogels, or trabecular bone.
Scattering from randomly oriented circular discs with application to vegetation
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1984-01-01
A vegetation layer is modeled by a collection of randomly oriented circular discs over a half space. The backscattering coefficient from such a half space is computed using the radiative transfer theory. It is shown that significantly different results are obtained from this theory as compared with some earlier investigations using the same modeling approach but with restricted disc orientations. In particular, the backscattered cross polarized returns cannot have a fast increasing angular trend which is inconsistent with measurements. By setting the appropriate angle of orientation to zero the theory reduces to previously published results. Comparisons are shown with measurements taken from milo, corn and wheat and good agreements are obtained for both polarized and cross polarized returns.
Scattering from randomly oriented circular discs with application to vegetation
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1983-01-01
A vegetation layer is modeled by a collection of randomly oriented circular discs over a half space. The backscattering coefficient from such a half space is computed using the radiative transfer theory. It is shown that significantly different results are obtained from this theory as compared with some earlier investigations using the same modeling approach but with restricted disc orientations. In particular, the backscattered cross-polarized returns cannot have a fast increasing angular trend which is inconsistent with measurements. By setting the appropriate angle of orientation to zero the theory reduces to previously published results. Comparisons are shown with measurements taken from milo, corn and wheat and good agreements are obtained for both polarized and cross-polarized returns.
NASA Technical Reports Server (NTRS)
Goldstein, M. L.
1976-01-01
The propagation of charged particles through interstellar and interplanetary space has often been described as a random process in which the particles are scattered by ambient electromagnetic turbulence. In general, this changes both the magnitude and direction of the particles' momentum. Some situations for which scattering in direction (pitch angle) is of primary interest were studied. A perturbed orbit, resonant scattering theory for pitch-angle diffusion in magnetostatic turbulence was slightly generalized and then utilized to compute the diffusion coefficient for spatial propagation parallel to the mean magnetic field, Kappa. All divergences inherent in the quasilinear formalism when the power spectrum of the fluctuation field falls off as K to the minus Q power (Q less than 2) were removed. Various methods of computing Kappa were compared and limits on the validity of the theory discussed. For Q less than 1 or 2, the various methods give roughly comparable values of Kappa, but use of perturbed orbits systematically results in a somewhat smaller Kappa than can be obtained from quasilinear theory.
Courneya, Kerry S; Friedenreich, Christine M; Sela, Rami A; Quinney, H Arthur; Rhodes, Ryan E; Jones, Lee W
2004-01-01
The purpose of this study was to examine postprogram exercise motivation and adherence in cancer survivors who participated in the Group Psychotherapy and Home-Based Physical Exercise (GROUP-HOPE; Courneya, Friedenreich, Sela, Quinney, & Rhodes, 2002) trial. At the completion of the GROUP-HOPE trial, 46 of 51 (90%) participants in the exercise group completed measures of attribution theory constructs. A 5-week follow-up self-report of exercise was then completed by 30 (65%) participants. Correlational analyses indicated that program exercise, perceived success, expected success, and affective reactions were strong predictors of postprogram exercise. In multivariate stepwise regression analyses, program exercise and perceived success were the strongest predictors of postprogram exercise. Additionally, perceived success was more important than objective success in understanding the attribution process, and it interacted with personal control to influence expected success and negative affect. Finally, postprogram quality of life and changes in physical fitness were correlates of perceived success. We concluded that attribution theory may have utility for understanding postprogram exercise motivation and adherence in cancer survivors.
Optimal Control of a Surge-Mode WEC in Random Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertok, Allan; Ceberio, Olivier; Staby, Bill
2016-08-30
The objective of this project was to develop one or more real-time feedback and feed-forward (MPC) control algorithms for an Oscillating Surge Wave Converter (OSWC) developed by RME called SurgeWEC™ that leverages recent innovations in wave energy converter (WEC) control theory to maximize power production in random wave environments. The control algorithms synthesized innovations in dynamic programming and nonlinear wave dynamics using anticipatory wave sensors and localized sensor measurements; e.g. position and velocity of the WEC Power Take Off (PTO), with predictive wave forecasting data. The result was an advanced control system that uses feedback or feed-forward data from an array of sensor channels comprised of both localized and deployed sensors fused into a single decision process that optimally compensates for uncertainties in the system dynamics, wave forecasts, and sensor measurement errors.
Fluorescence correlation spectroscopy: the case of subdiffusion.
Lubelski, Ariel; Klafter, Joseph
2009-03-18
The theory of fluorescence correlation spectroscopy is revisited here for the case of subdiffusing molecules. Subdiffusion is assumed to stem from a continuous-time random walk process with a fat-tailed distribution of waiting times and can therefore be formulated in terms of a fractional diffusion equation (FDE). The FDE plays the central role in developing the fluorescence correlation spectroscopy expressions, analogous to the role played by the simple diffusion equation for regular systems. Due to the nonstationary nature of the continuous-time random walk/FDE, some interesting properties emerge that are amenable to experimental verification and may help in discriminating among subdiffusion mechanisms. In particular, the current approach predicts (1) a strong dependence of correlation functions on the initial time (aging); (2) sensitivity of correlation functions to the averaging procedure, ensemble versus time averaging (ergodicity breaking); and (3) that the basic mean-squared displacement observable depends on how the mean is taken.
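A minimal continuous-time random walk with Pareto waiting times reproduces the subdiffusive scaling behind the fractional diffusion equation used above, with the ensemble mean-squared displacement growing roughly like t^alpha; the same machinery can be used to probe aging and ergodicity breaking. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def ctrw_positions(n_walkers=2000, t_obs=1000.0, alpha=0.7):
    """CTRW with fat-tailed waiting times, P(w > t) ~ t**(-alpha),
    and unit jumps of random sign; returns positions at time t_obs."""
    pos = np.zeros(n_walkers)
    clock = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        wait = rng.pareto(alpha, size=n_walkers) + 1.0   # waits >= 1
        clock += np.where(active, wait, 0.0)
        active &= clock < t_obs              # no jump past the window
        jumps = rng.choice([-1.0, 1.0], size=n_walkers)
        pos += np.where(active, jumps, 0.0)
    return pos

# Ensemble MSD grows sublinearly, roughly like t**alpha:
for t_obs in (100.0, 1000.0, 10000.0):
    msd = np.mean(ctrw_positions(t_obs=t_obs) ** 2)
    print(f"t = {t_obs:>7.0f}   MSD = {msd:8.2f}")
```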
Gender Pay Equity in Higher Education: Salary Differentials and Predictors of Base Faculty Income
ERIC Educational Resources Information Center
Meyers, Laura E.
2011-01-01
This study investigates faculty gender pay equity in higher education. Using data from the 2004 National Study of Postsecondary Faculty and drawing on human capital theory, structural theory, and the theory of comparable worth, this study uses cross-classified random effects modeling to explore what factors may be contributing to the pay…
Apker Award Recipient: Renormalization-Group Study of Helium Mixtures Immersed in a Porous Medium
NASA Astrophysics Data System (ADS)
Lopatnikova, Anna
1998-03-01
Superfluidity and phase separation in ^3He-^4He mixtures immersed in aerogel are studied by renormalization-group theory. Firstly, the theory is applied to jungle-gym (non-random) aerogel.(A. Lopatnikova and A.N. Berker, Phys. Rev. B 55, 3798 (1997).) This calculation is conducted via the coupled renormalization-group mappings of interactions near and away from aerogel. Superfluidity at very low ^4He concentrations and a depressed tricritical temperature are found at the onset of superfludity. A superfluid-superfluid phase separation, terminating at an isolated critical point, is found entirely within the superfluid phase. Secondly, the theory is applied to true aerogel, which has quenched disorder at both atomic and geometric levels.(A. Lopatnikova and A.N. Berker, Phys. Rev. B 56, 11865 (1997).) This calculation is conducted via the coupled renormalization-group mappings, near and away from aerogel, of quenched probability distributions of random interactions. Random-bond effects on superfluidity onset and random-field effects on superfluid phase separation are seen. The quenched randomness causes the λ line of second-order phase transitions of superfluidity onset to reach zero temperature, in agreement with general prediction and experiments. Based on these studies, the experimentally observed(S.B. Kim, J. Ma, and M.H.W. Chan, Phys. Rev. Lett. 71, 2268 (1993); N. Mulders and M.H.W. Chan, Phys. Rev. Lett. 75, 3705 (1995).) distinctive characteristics of ^3He-^4He mixtures in aerogel are related to the aerogel properties of connectivity, tenuousness, and atomic and geometric randomness.
Large Fluctuations for Spatial Diffusion of Cold Atoms
NASA Astrophysics Data System (ADS)
Aghion, Erez; Kessler, David A.; Barkai, Eli
2017-06-01
We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.
The fuzzy cube and causal efficacy: representation of concomitant mechanisms in stroke.
Jobe, Thomas H.; Helgason, Cathy M.
1998-04-01
Twentieth century medical science has embraced nineteenth century Boolean probability theory based upon two-valued Aristotelian logic. With the later addition of bit-based, von Neumann structured computational architectures, an epistemology based on randomness has led to a bivalent epidemiological methodology that dominates medical decision making. In contrast, fuzzy logic, based on twentieth century multi-valued logic, and computational structures that are content addressed and adaptively modified, has advanced a new scientific paradigm for the twenty-first century. Diseases such as stroke involve multiple concomitant causal factors that are difficult to represent using conventional statistical methods. We tested which paradigm best represented this complex multi-causal clinical phenomenon: stroke. We show that the fuzzy logic paradigm better represented clinical complexity in cerebrovascular disease than current probability theory based methodology. We believe this finding is generalizable to all of clinical science since multiple concomitant causal factors are involved in nearly all known pathological processes.
NASA Astrophysics Data System (ADS)
Fiori, A.; Cvetkovic, V.; Dagan, G.; Attinger, S.; Bellin, A.; Dietrich, P.; Zech, A.; Teutsch, G.
2016-12-01
The emergence of stochastic subsurface hydrology stemmed from the realization that the random spatial variability of aquifer properties has a profound impact on solute transport. The last four decades witnessed a tremendous expansion of the discipline, many fundamental processes and principal mechanisms being identified. However, the research findings have not significantly impacted applications in practice, for several reasons which are discussed. The paper discusses the current status of stochastic subsurface hydrology and the relevance of the scientific results for applications, and it provides a perspective on a few possible future directions. In particular, we discuss how the transfer of knowledge can be facilitated by identifying clear goals for characterization and modeling application, relying on recent advances in research in these areas.
Cuckoos, cowbirds and hosts: adaptations, trade-offs and constraints.
Krüger, Oliver
2007-10-29
The interactions between brood parasitic birds and their host species provide one of the best model systems for coevolution. Despite being intensively studied, the parasite-host system provides ample opportunities to test new predictions from both coevolutionary theory as well as life-history theory in general. I identify four main areas that might be especially fruitful: cuckoo female gentes as alternative reproductive strategies, non-random and nonlinear risks of brood parasitism for host individuals, host parental quality and targeted brood parasitism, and differences and similarities between predation risk and parasitism risk. Rather than being a rare and intriguing system to study coevolutionary processes, I believe that avian brood parasites and their hosts are much more important as extreme cases in the evolution of life-history strategies. They provide unique examples of trade-offs and situations where constraints are either completely removed or particularly severe.
Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.
Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel
2014-08-01
Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.
Navier-Stokes simulation of the crossflow instability in swept-wing flows
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1989-01-01
The computational modeling of the transition process characteristic of flows over swept wings is described. Specifically, the crossflow instability and crossflow/T-S wave interactions are analyzed through the numerical solution of the full three-dimensional Navier-Stokes equations including unsteadiness, curvature, and sweep. This approach is chosen because of the complexity of the problem and because it appears that linear stability theory is insufficient to explain the discrepancies between different experiments and between theory and experiments. The leading edge region of a swept wing is considered in a three-dimensional spatial simulation with random disturbances as the initial conditions. The work has been closely coordinated with the experimental program of Professor William Saric, examining the same problem. Comparisons with NASA flight test data and the experiments at Arizona State University were a necessary and an important integral part of this work.
Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving
Semeniuk, Yulia Yuriyivna; Brown, Roger L.; Riesch, Susan K.
2016-01-01
We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem solving skill. The intervention is based on the Circumplex Model and Social Problem Solving Theory. The Circumplex Model posits that families who are balanced, that is characterized by high cohesion and flexibility and open communication, function best. Social Problem Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large magnitude group effects for selected scales for youth and dyads portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. PMID:26936844
Increases in Tolerance within Naturalistic, Self-Help Recovery Homes
Olson, Brad D.; Jason, Leonard A.; Davidson, Michelle; Ferrari, Joseph R.
2011-01-01
Changes in tolerance toward others (i.e., a universality/diversity measure) among 150 participants (93 women, 57 men) discharged from inpatient treatment centers, randomly assigned to either a self-help, communal living setting or usual after-care and interviewed every 6 months for a 24-month period, were explored. Hierarchical Linear Modeling examined the effect of condition (Therapeutic Communal Living versus Usual Care) and other moderator variables on wave trajectories of tolerance attitudes (i.e., universality/diversity scores). Over time, residents of the communal living recovery model showed significantly greater tolerance trajectories than usual care participants. Results supported the claim that residents of communal living settings unite around superordinate goals of overcoming substance abuse problems. Also, older residents, compared to younger ones, living in a house for 6 or more months experienced the greatest increases in tolerance. Theories regarding these differential increases in tolerance, such as social contact theory and transtheoretical processes of change, are discussed. PMID:19838787
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
NASA Astrophysics Data System (ADS)
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
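As a concrete companion to such an introduction, the following sketch simulates a random telegraph signal and compares an averaged periodogram against the Lorentzian power spectral density implied by its exponential autocorrelation; the constants follow the common two-sided convention and should be checked against one's preferred reference.

```python
import numpy as np

rng = np.random.default_rng(5)

# Random telegraph signal: value +/-1, switching at Poisson rate lam.
# Autocorrelation R(tau) = exp(-2*lam*|tau|) gives the two-sided PSD
# S(f) = 4*lam / ((2*lam)**2 + (2*pi*f)**2).
lam, fs, n = 5.0, 1000.0, 2**18
dt = 1.0 / fs
flips = rng.random(n) < lam * dt                 # flip events on a fine grid
x = np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)

# Segment-averaged periodogram estimate of the PSD:
seg = 2**12
segments = x[: n - n % seg].reshape(-1, seg)
psd = np.mean(np.abs(np.fft.rfft(segments, axis=1)) ** 2, axis=0) * dt / seg
freqs = np.fft.rfftfreq(seg, dt)

print("periodogram at low f :", psd[1:4])
print("Lorentzian prediction:", 4 * lam / ((2 * lam) ** 2 + (2 * np.pi * freqs[1:4]) ** 2))
```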
Donovan, Jenny L; de Salis, Isabel; Toerien, Merran; Paramasivan, Sangeetha; Hamdy, Freddie C; Blazeby, Jane M
2014-08-01
The aim of the study was to investigate how doctors considered and experienced the concept of equipoise while recruiting patients to randomized controlled trials (RCTs). In-depth interviews with 32 doctors in six publicly funded pragmatic RCTs explored their perceptions of equipoise as they undertook RCT recruitment. The RCTs varied in size, duration, type of complex intervention, and clinical specialties. Interview data were analyzed using qualitative content and thematic analytical methods derived from grounded theory and synthesized across six RCTs. All six RCTs suffered from poor recruitment. Doctors wanted to gather robust evidence but experienced considerable discomfort and emotion in relation to their clinical instincts and concerns about patient eligibility and safety. Although they relied on a sense of community equipoise to justify participation, most acknowledged having "hunches" about particular treatments and patients, some of which undermined recruitment. Surgeons experienced these issues most intensely. Training and support promoted greater confidence in equipoise and improved engagement and recruitment. Recruitment to RCTs is a fragile process and difficult for doctors intellectually and emotionally. Training and support can enable most doctors to become comfortable with key RCT concepts including equipoise, uncertainty, patient eligibility, and randomization, promoting a more resilient recruitment process in partnership with patients. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Bluethmann, Shirley M.; Bartholomew, L. Kay; Murphy, Caitlin C.; Vernon, Sally W.
2017-01-01
Objective Theory use may enhance effectiveness of behavioral interventions, yet critics question whether theory-based interventions have been sufficiently scrutinized. This study applied a framework to evaluate theory use in physical activity interventions for breast cancer survivors. The aims were to (1) evaluate theory application intensity and (2) assess the association between extensiveness of theory use and intervention effectiveness. Methods Studies were previously identified through a systematic search, including only randomized controlled trials published from 2005 to 2013, that addressed physical activity behavior change and studied survivors who were <5 years posttreatment. Eight theory items from Michie and Prestwich’s coding framework were selected to calculate theory intensity scores. Studies were classified into three subgroups based on extensiveness of theory use (Level 1 = sparse; Level 2 = moderate; and Level 3 = extensive). Results Fourteen randomized controlled trials met search criteria. Most trials used the transtheoretical model (n = 5) or social cognitive theory (n = 3). For extensiveness of theory use, 5 studies were classified as Level 1, 4 as Level 2, and 5 as Level 3. Studies in the extensive group (Level 3) had the largest overall effect size (g = 0.76). Effects were more modest in Level 1 and 2 groups with overall effect sizes of g = 0.28 and g = 0.36, respectively. Conclusions Theory use is often viewed as essential to behavior change, but theory application varies widely. In this study, there was some evidence to suggest that extensiveness of theory use enhanced intervention effectiveness. However, there is more to learn about how theory can improve interventions for breast cancer survivors. PMID:27226430
Coherent backscattering of light by complex random media of spherical scatterers: numerical solution
NASA Astrophysics Data System (ADS)
Muinonen, Karri
2004-07-01
Novel Monte Carlo techniques are described for the computation of reflection coefficient matrices for multiple scattering of light in plane-parallel random media of spherical scatterers. The present multiple scattering theory is composed of coherent backscattering and radiative transfer. In the radiative transfer part, the Stokes parameters of light escaping from the medium are updated at each scattering process in predefined angles of emergence. The scattering directions at each process are randomized using probability densities for the polar and azimuthal scattering angles: the former angle is generated using the single-scattering phase function, whereafter the latter follows from Kepler's equation. For spherical scatterers in the Rayleigh regime, randomization proceeds semi-analytically whereas, beyond that regime, cubic spline presentation of the scattering matrix is used for numerical computations. In the coherent backscattering part, the reciprocity of electromagnetic waves in the backscattering direction allows the renormalization of the reversely propagating waves, whereafter the scattering characteristics are computed in other directions. High orders of scattering (~10 000) can be treated because of the peculiar polarization characteristics of the reverse wave: after a number of scatterings, the polarization state of the reverse wave becomes independent of that of the incident wave, that is, it becomes fully dictated by the scatterings at the end of the reverse path. The coherent backscattering part depends on the single-scattering albedo in a nonmonotonic way, the most pronounced signatures showing up for absorbing scatterers. The numerical results compare favourably to the literature results for nonabsorbing spherical scatterers both in and beyond the Rayleigh regime.
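The semi-analytic randomization available in the Rayleigh regime can be illustrated as follows: drawing the cosine of the polar scattering angle from the Rayleigh phase function p(μ) = (3/8)(1 + μ²) by inverting its cumulative distribution reduces to one real cubic root per uniform deviate, obtainable in closed form by Cardano's formula. This is our sketch, not the author's code.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_rayleigh_mu(n):
    """Draw mu = cos(theta) from p(mu) = (3/8)*(1 + mu**2) on [-1, 1].
    Inverting the CDF gives mu**3 + 3*mu + (4 - 8*u) = 0, a depressed
    cubic with exactly one real root (Cardano)."""
    u = rng.random(n)
    q = 4.0 - 8.0 * u
    s = np.sqrt(0.25 * q**2 + 1.0)
    return np.cbrt(-0.5 * q + s) + np.cbrt(-0.5 * q - s)

mu = sample_rayleigh_mu(200_000)
# Sanity checks: E[mu] = 0 and E[mu**2] = 2/5 for this phase function.
print("mean:", mu.mean(), "  E[mu^2]:", (mu**2).mean())
```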
De Vries, A
1984-01-01
Darwin's theory of evolution by natural selection on the basis of inherited random individual variation and excessive offspring remains controversial. Arguments derive from religion (creationism)--disagreement with the Darwinian reduction of teleology to physical causation, from science--nonselective mechanism, and from logic--negation of the possibility of proof of any scientific theory allowing only for validity thus far. Experimentation on Darwinism is beset with a practical obstacle--unattainable duration, and a logical one--nonpredictability inherent in the randomness of variation. Evolution and creationism need not be contradictory if viewed in their separate domains--rational versus miraculous.
Schumacher, Sophie; Kemps, Eva; Tiggemann, Marika
2017-06-01
The elaborated-intrusion theory of desire proposes that craving is a two-stage process whereby initial intrusions about a desired target are subsequently elaborated with mental imagery. The present study tested whether the craving reduction strategies of cognitive defusion and guided imagery could differentially target the intrusion and elaboration stages, respectively, and thus differentially impact the craving process. Participants were randomly assigned to a cognitive defusion, a guided imagery or a mind-wandering control condition. Pre- and post-intervention chocolate-related thoughts, intrusiveness of thoughts, vividness of imagery, craving intensity, and chocolate consumption were compared. Experiment 1 recruited a general sample of young women (n = 94), whereas Experiment 2 recruited a sample of chocolate cravers who wanted to reduce their chocolate consumption (n = 97). Across both experiments, cognitive defusion lowered intrusiveness of thoughts, vividness of imagery and craving intensity. Guided imagery reduced chocolate-related thoughts, intrusiveness, vividness and craving intensity for chocolate cravers (Experiment 2), but not for the general sample (Experiment 1). There were no group differences in chocolate consumption in either experiment. Results add to existing evidence supporting the elaborated-intrusion theory of desire in the food domain, and suggest that acceptance- and imagery-based techniques have potential for use in combatting problematic cravings. Copyright © 2017 Elsevier Ltd. All rights reserved.
From human behavior to the spread of mobile phone viruses
NASA Astrophysics Data System (ADS)
Wang, Pu
Percolation theory was initiated some 50 years ago as a mathematical framework for the study of random physical processes such as the flow of a fluid through a disordered porous medium. It has proved to be a remarkably rich theory, with applications ranging from thermodynamic phase transitions to complex networks. In this dissertation, percolation theory is used to study the diffusion process of mobile phone viruses. Methodologies widely used in statistical physics are also applied to uncover the underlying statistical laws of human behavior and to simulate the spread of mobile phone viruses in a large population. I find that while Bluetooth viruses can reach all susceptible handsets with time, they spread slowly due to human mobility, offering ample opportunities to deploy antiviral software. In contrast, viruses utilizing multimedia messaging services (MMS) could infect all users in hours, but currently a phase transition on the underlying call graph limits them to only a small fraction of the susceptible users. These results explain the lack of a major mobile virus outbreak so far and predict that once a mobile operating system's market share reaches the phase transition point, viruses will pose a serious threat to mobile communications. These studies show how large datasets and the tools of statistical physics can be used to study specific and important problems, such as the spread of mobile phone viruses.
Autoimmunity: a decision theory model.
Morris, J A
1987-01-01
Concepts from statistical decision theory were used to analyse the detection problem faced by the body's immune system in mounting immune responses to bacteria of the normal body flora. Given that these bacteria are potentially harmful, that there can be extensive cross reaction between bacterial antigens and host tissues, and that the decisions are made in uncertainty, there is a finite chance of error in immune response leading to autoimmune disease. A model of ageing in the immune system is proposed that is based on random decay in components of the decision process, leading to a steep age dependent increase in the probability of error. The age incidence of those autoimmune diseases which peak in early and middle life can be explained as the resultant of two processes: an exponentially falling curve of incidence of first contact with common bacteria, and a rapidly rising error function. Epidemiological data on the variation of incidence with social class, sibship order, climate and culture can be used to predict the likely site of carriage and mode of spread of the causative bacteria. Furthermore, those autoimmune diseases precipitated by common viral respiratory tract infections might represent reactions to nasopharyngeal bacterial overgrowth, and this theory can be tested using monoclonal antibodies to search the bacterial isolates for cross reacting antigens. If this model is correct then prevention of autoimmune disease by early exposure to low doses of bacteria might be possible. PMID:3818985
Time-evolution of grain size distributions in random nucleation and growth crystallization processes
NASA Astrophysics Data System (ADS)
Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.
2010-02-01
We study the time dependence of the grain size distribution N(r,t) during crystallization of a d -dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d . We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.
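Since one class of solutions evolves into the lognormal form, a quick check of measured grain radii against a lognormal distribution is often the first step in data analysis. The sketch below is a minimal illustration in which synthetic radii stand in for measurements; the paper's analytical N(r,t) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical grain radii (nm); a lognormal sample stands in for measured data.
r = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=5000)

# Maximum-likelihood lognormal parameters from the log-transformed radii.
mu_hat, sigma_hat = np.log(r).mean(), np.log(r).std()
median = np.exp(mu_hat)                        # lognormal median
mean = np.exp(mu_hat + 0.5 * sigma_hat**2)     # first moment of the fitted form
print(f"median ~ {median:.1f} nm, mean ~ {mean:.1f} nm, sigma ~ {sigma_hat:.2f}")
```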
NASA Astrophysics Data System (ADS)
Cauffriez, Laurent
2017-01-01
This paper deals with the modeling of the random failure process of a Safety Instrumented System (SIS). It aims to identify the expected number of failures for a SIS during its lifecycle. Because a SIS is tested periodically, it is natural to apply Bernoulli trials to characterize its random failure process and thus to verify whether the PFD (Probability of Failing Dangerously) obtained experimentally agrees with the theoretical one. Moreover, the notion of "odds on" found in Bernoulli theory allows engineers and scientists to easily determine the ratio between "outcomes with success: failure of SIS" and "outcomes without success: no failure of SIS" and to confirm that SIS failures occur sporadically. A stochastic P-temporised Petri net is proposed and serves as a reference model for describing the failure process of a 1oo1 SIS architecture. Simulations of this stochastic Petri net demonstrate that, during its lifecycle, the SIS is rarely in a state in which it cannot perform its mission. Experimental results are compared with Bernoulli trials in order to validate the power of Bernoulli trials for modeling the failure process of a SIS. The determination of the expected number of failures for a SIS during its lifecycle opens interesting research perspectives for engineers and scientists by complementing the notion of PFD.
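A rough flavor of the Bernoulli-trial view can be given in a few lines: each proof-test interval is treated as one trial, with an assumed dangerous failure rate (the value below is illustrative, not from the paper), and the expected number of failures over the lifecycle follows from the number of trials.

```python
import numpy as np

rng = np.random.default_rng(42)
LIFETIME_YEARS = 15
TEST_INTERVAL_YEARS = 1          # periodic proof test of a 1oo1 SIS
LAMBDA_D = 1e-2                  # assumed dangerous failure rate per year (illustrative)

# Per-interval failure probability from an exponential failure time.
p_fail = 1.0 - np.exp(-LAMBDA_D * TEST_INTERVAL_YEARS)

n_tests = LIFETIME_YEARS // TEST_INTERVAL_YEARS
trials = rng.random((100_000, n_tests)) < p_fail     # one Bernoulli trial per interval
failures_per_lifecycle = trials.sum(axis=1)

print("expected failures per lifecycle:", n_tests * p_fail)
print("simulated mean:", failures_per_lifecycle.mean())
print("odds against failure in one interval: %.0f : 1" % ((1 - p_fail) / p_fail))
```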
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying a filtering method to the conditional correlation matrix (such as filtering based on Random Matrix Theory). As empirical support for the simulation results, the analysis is also carried out on a time series of S&P 500 stock prices.
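The two ingredients, minimum-variance weights and an RMT-based filter, can be sketched as follows. This is a generic illustration on synthetic i.i.d. returns, not the paper's CCC-GARCH pipeline; the filter shown is the common eigenvalue-clipping recipe that flattens the noise band below the Marchenko-Pastur edge.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w proportional to inverse(Cov) @ 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def rmt_clip(corr, n_obs):
    """Replace eigenvalues below the Marchenko-Pastur edge by their average,
    a common RMT noise filter for correlation matrices."""
    n = corr.shape[0]
    lam_max = (1 + np.sqrt(n / n_obs)) ** 2       # MP upper edge for q = n/n_obs
    vals, vecs = np.linalg.eigh(corr)
    noise = vals < lam_max
    vals[noise] = vals[noise].mean()              # flatten the noise band
    filtered = vecs @ np.diag(vals) @ vecs.T
    np.fill_diagonal(filtered, 1.0)
    return filtered

rng = np.random.default_rng(0)
T, N = 500, 100
returns = rng.standard_normal((T, N)) * 0.01      # synthetic pure-noise returns
corr = np.corrcoef(returns, rowvar=False)
sig = returns.std(axis=0)
cov = rmt_clip(corr, T) * np.outer(sig, sig)
print(min_variance_weights(cov)[:5])              # near-equal weights for pure noise
```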
Developing and executing quality improvement projects (concept, methods, and evaluation).
Likosky, Donald S
2014-03-01
Continuous quality improvement, quality assurance, cycles of change--these words are often used to express the process of using data to inform and improve clinical care. Although many of us have been exposed to the theory and practice of experimental work (e.g., the randomized trial), few of us have been similarly exposed to the science underlying quality improvement. Through the lens of a single-center quality improvement study, this article exposes the reader to the methodology for conducting such studies. The reader will gain an understanding of the methods required to embark on such a study.
Constraints on the invariant functions of axisymmetric turbulence
NASA Technical Reports Server (NTRS)
Kerschen, E. J.
1983-01-01
Constraints are derived for the two invariant functions Q1 and Q2 that occur in Chandrasekhar's (1950) development of the axisymmetric turbulence theory. These constraints must be satisfied for the correlation tensor derived from Q1 and Q2 to be that of a stationary random process, i.e., for the turbulence to be realizable. The equivalent results in spectrum space are also developed. Applications of the constraints in aerodynamic noise modeling are discussed. It is shown that significant errors in prediction can be introduced by the use of turbulence models which violate the constraints.
Stochastic differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobczyk, K.
1990-01-01
This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.
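As a small illustration of the numerical solution methods such a treatment covers, the sketch below applies the Euler-Maruyama scheme to an Ornstein-Uhlenbeck process; the scheme and the example are standard, not taken from the book itself.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, rng):
    """Pathwise numerical solution of the Ito SDE dX = a(X,t) dt + b(X,t) dW."""
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = 0.0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over one step
        x[k + 1] = x[k] + drift(x[k], t) * dt + diffusion(x[k], t) * dw
        t += dt
    return x

# Ornstein-Uhlenbeck process, a standard model for randomly excited structures:
rng = np.random.default_rng(7)
path = euler_maruyama(lambda x, t: -2.0 * x, lambda x, t: 0.5, 1.0, 5.0, 5000, rng)
print(path[-1])
```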
2007-01-01
The idea of quantum entanglement is borrowed from physics and developed into an algebraic argument to explain how double-blind randomized controlled trials could lead to failure to provide unequivocal evidence for the efficacy of homeopathy, and to an inability to distinguish proving and placebo groups in homeopathic pathogenetic trials. By analogy with the famous double-slit experiment of quantum physics, and with more modern notions of quantum information processing, these failings are understood as blinding causing information loss, resulting from a kind of quantum superposition between the remedy and placebo. PMID:17342236
Mann, Courtney M; Ward, Dianne S; Vaughn, Amber; Benjamin Neelon, Sara E; Long Vidal, Lenita J; Omar, Sakinah; Namenek Brouwer, Rebecca J; Østbye, Truls
2015-12-10
Many families rely on child care outside the home, making these settings important influences on child development. Nearly 1.5 million children in the U.S. spend time in family child care homes (FCCHs), where providers care for children in their own residences. There is some evidence that children in FCCHs are heavier than those cared for in centers. However, few interventions have targeted FCCHs for obesity prevention. This paper will describe the application of the Intervention Mapping (IM) framework to the development of a childhood obesity prevention intervention for FCCHs. Following the IM protocol, six steps were completed in the planning and development of an intervention targeting FCCHs: needs assessment, formulation of change objectives matrices, selection of theory-based methods and strategies, creation of intervention components and materials, adoption and implementation planning, and evaluation planning. Application of the IM process resulted in the creation of the Keys to Healthy Family Child Care Homes program (Keys), which includes three modules: Healthy You, Healthy Home, and Healthy Business. Delivery of each module includes a workshop, educational binder and tool-kit resources, and four coaching contacts. Social Cognitive Theory and Self-Determination Theory helped guide development of change objective matrices, selection of behavior change strategies, and identification of outcome measures. The Keys program is currently being evaluated through a cluster-randomized controlled trial. The IM process, while time-consuming, enabled rigorous and systematic development of intervention components that are directly tied to behavior change theory and may increase the potential for behavior change within the FCCHs.
Development of a theory-guided pan-European computer-assisted safer sex intervention.
Nöstlinger, Christiana; Borms, Ruth; Dec-Pietrowska, Joanna; Dias, Sonia; Rojas, Daniela; Platteau, Tom; Vanden Berghe, Wim; Kok, Gerjo
2016-12-01
HIV is a growing public health problem in Europe, with men-having-sex-with-men and migrants from endemic regions as the most affected key populations. More evidence on effective behavioral interventions to reduce sexual risk is needed. This article describes the systematic development of a theory-guided computer-assisted safer sex intervention, aiming at supporting people living with HIV in sexual risk reduction. We applied the Intervention Mapping (IM) protocol to develop this counseling intervention in the framework of a European multicenter study. We conducted a needs assessment guided by the information-motivation-behavioral (IMB) skills model, formulated change objectives and selected theory-based methods and practical strategies, i.e. interactive computer-assisted modules as supporting tools for provider-delivered counseling. Theoretical foundations were the IMB skills model, social cognitive theory and the transtheoretical model, complemented by dual process models of affective decision making to account for the specifics of sexual behavior. The counseling approach for delivering three individual sessions was tailored to participants' needs and contexts, adopting elements of motivational interviewing and cognitive-behavioral therapy. We implemented and evaluated the intervention using a randomized controlled trial combined with a process evaluation. IM provided a useful framework for developing a coherent intervention for heterogeneous target groups, which was feasible and effective across the culturally diverse settings. This article responds to the need for transparent descriptions of the development and content of evidence-based behavior change interventions as potential pillars of effective combination prevention strategies.
Evidence-based interventions for reading and language difficulties: creating a virtuous circle.
Snowling, Margaret J; Hulme, Charles
2011-03-01
BACKGROUND: Children may experience two very different forms of reading problem: decoding difficulties (dyslexia) and reading comprehension difficulties. Decoding difficulties appear to be caused by problems with phonological (speech sound) processing. Reading comprehension difficulties in contrast appear to be caused by problems with 'higher level' language difficulties including problems with semantics (including deficient knowledge of word meanings) and grammar (knowledge of morphology and syntax). AIMS: We review evidence concerning the nature, causes of, and treatments for children's reading difficulties. We argue that any well-founded educational intervention must be based on a sound theory of the causes of a particular form of learning difficulty, which in turn must be based on an understanding of how a given skill is learned by typically developing children. Such theoretically motivated interventions should in turn be evaluated in randomized controlled trials (RCTs) to establish whether they are effective, and for whom. RESULTS: There is now considerable evidence showing that phonologically based interventions are effective in ameliorating children's word level decoding difficulties, and a smaller evidence base showing that reading and oral language (OL) comprehension difficulties can be ameliorated by suitable interventions to boost vocabulary and broader OL skills. CONCLUSIONS: The process of developing theories about the origins of children's educational difficulties and evaluating theoretically motivated treatments in RCTs produces a 'virtuous circle' whereby theory informs practice, and the evaluation of effective interventions in turn feeds back to inform and refine theories about the nature and causes of children's reading and language difficulties.
Scruggs, Stacie; Mama, Scherezade K; Carmack, Cindy L; Douglas, Tommy; Diamond, Pamela; Basen-Engquist, Karen
2018-01-01
This study examined whether a physical activity intervention affects transtheoretical model (TTM) variables that facilitate exercise adoption in breast cancer survivors. Sixty sedentary breast cancer survivors were randomized to a 6-month lifestyle physical activity intervention or standard care. TTM variables that have been shown to facilitate exercise adoption and progress through the stages of change, including self-efficacy, decisional balance, and processes of change, were measured at baseline, 3 months, and 6 months. Differences in TTM variables between groups were tested using repeated measures analysis of variance. The intervention group had significantly higher self-efficacy (F = 9.55, p = .003) and perceived significantly fewer cons of exercise (F = 5.416, p = .025) at 3 and 6 months compared with the standard care group. Self-liberation, counterconditioning, and reinforcement management processes of change increased significantly from baseline to 6 months in the intervention group, and self-efficacy and reinforcement management were significantly associated with improvement in stage of change. The stage-based physical activity intervention increased use of select processes of change, improved self-efficacy, decreased perceptions of the cons of exercise, and helped participants advance in stage of change. These results point to the importance of using a theory-based approach in interventions to increase physical activity in cancer survivors.
ERIC Educational Resources Information Center
Chan, Randolph C. H.; Mak, Winnie W. S.; Pang, Ingrid H. Y.; Wong, Samuel Y. S.; Tang, Wai Kwong; Lau, Joseph T. F.; Woo, Jean; Lee, Diana T. F.; Cheung, Fanny M.
2018-01-01
The present study examined whether, when, and how motivational messaging can boost the response rate of postal surveys for physicians based on Higgins' regulatory focus theory, accounting for its cost-effectiveness. A three-arm, blinded, randomized controlled design was used. A total of 3,270 doctors were randomly selected from the registration…
Nakamura, Saki; Inayama, Takayo; Arao, Takashi
2017-01-13
Web-based nutritional education programmes appear to be comparable to those delivered face-to-face. However, no existing web-based nutrition education or similar programme has yet been evaluated with consideration of socio-economic status. The objectives of this randomized controlled trial (RCT) of a nutritional education programme promoting vegetable intake are to evaluate the effects of the intervention and to determine how socio-economic status influences those effects. Participants will be randomly sampled individuals (aged 30-59), stratified according to national population statistics for sex, age, and household income. Participants who consented to survey participation (n = 1500) will be randomly divided into intervention and control groups. The intervention period is 5 weeks, with one step of diet-related education per week. The main outcome of the programme is dietary behaviour, namely eating vegetables (350 g per day; five small bowls). To encourage behavioural change, the programme contents are prepared using behavioural theories and techniques tailored to the assumed group stages of behavioural change. In the first step, we employ the health belief model to encourage a shift from the pre-contemplative to the contemplative phase; in the second and third steps, social cognitive theory is used to encourage transition to the preparatory phase; in the fourth step, social cognitive theory and strengthening of social support are used to promote progression to the execution phase; finally, in the fifth step, strengthening of social capital and social support is used to promote the shift to the maintenance phase. The baseline, post-intervention and follow-up surveys are assessed using a self-administered questionnaire. For process evaluation, we use five items relating to programme participation and satisfaction. A follow-up survey of participants will be carried out 3 months after intervention completion. The fact that this study is an RCT with an established control group is a strong advantage. Information and communications technology is not limited by time or place. If this web-based nutrition education programme is shown to have a positive effect, it may be an appropriate tool for reaching individuals of lower socio-economic status. Current Controlled Trials UMIN-ICDR UMIN 000019376 (Registered October 16, 2015).
Giguere, Anik M C; Lawani, Moulikatou Adouni; Fortier-Brochu, Émilie; Carmichael, Pierre-Hugues; Légaré, France; Kröger, Edeltraut; Witteman, Holly O; Voyer, Philippe; Caron, Danielle; Rodríguez, Charo
2018-06-25
The increasing prevalence of Alzheimer's disease and other forms of dementia raises new challenges to ensure that healthcare decisions are informed by research evidence and reflect what is important for seniors and their caregivers. Therefore, we aim to evaluate a tailored intervention to help healthcare providers empower seniors and their caregivers in making health-related decisions. In two phases, we will: (1) design and tailor the intervention; and (2) implement and evaluate it. We will use theory and user-centered design to tailor an intervention comprising a distance professional training program on shared decision-making and five shared decision-making tools dealing with difficult decisions often faced by seniors with dementia and their caregivers. Each tool will be designed in two versions, one for clinicians and one for patients. We will recruit 49 clinicians and 27 seniors/caregivers to participate in three cycles of design-evaluation-feedback of each intervention component. Besides think-aloud and interview approaches, users will also complete questionnaires based on the Theory of Planned Behavior to identify the factors most likely to influence their adoption of shared decision-making after exposure to the intervention. We will then modify the intervention by adding/enhancing behavior-change techniques targeting these factors. We will evaluate the effectiveness of this tailored intervention before/after implementation, in a two-armed, clustered randomized trial. We will enroll a convenience sample of six primary care clinics (unit of randomization) in the province of Quebec and recruit the clinicians who practice there (mostly family physicians, nurses, and social workers). These clinics will then be randomized to immediate exposure to the intervention or delayed exposure. Overall, we will recruit 180 seniors with dementia, their caregivers, and their healthcare providers. We will evaluate the impact of the intervention on patient involvement in the decision-making process, decisional comfort, patient and caregiver personal empowerment in relation to their own healthcare, patient quality of life, caregiver burden, and decisional regret. The intervention will empower patients and their caregivers in their healthcare, by fostering their participation as partners during the decision-making process and by ensuring they make informed decisions congruent with their values and priorities. ClinicalTrials.gov, NCT02956694. Registered on 31 October 2016.
Andersen, Anette; Bast, Lotus Sofie; Ringgaard, Lene Winther; Wohllebe, Louise; Jensen, Poul Dengsøe; Svendsen, Maria; Dalum, Peter; Due, Pernille
2014-05-28
Adolescent smoking is still highly prevalent in Denmark. One in four 13-year olds indicates that they have tried to smoke, and one in four 15-year olds answer that they smoke regularly. Smoking is more prevalent in socioeconomically disadvantaged populations in Denmark as well as in most Western countries. Previous school-based programs to prevent smoking have shown contrasting results internationally. In Denmark, previous programs have shown limited or no effect. This indicates a need for developing a well-designed, comprehensive, and multi-component intervention aimed at Danish schools with careful implementation and thorough evaluation. This paper describes X:IT, a study including 1) the development of a 3-year school-based multi-component intervention and 2) the randomized trial investigating the effect of the intervention. The study aims at reducing the prevalence of smoking among 13 to 15-year olds by 25%. The X:IT study is based on the Theory of Triadic Influences. The theory organizes factors influencing adolescent smoking into three streams: cultural environment, social situation, and personal factors. We added a fourth stream, the community aspects. The X:IT program comprises three main components: 1) smoke-free school premises, 2) parental involvement including smoke-free dialogues and smoke-free contracts between students and parents, and 3) a curricular component. The study encompasses process- and effect-evaluations as well as health economic analyses. Ninety-four schools in 17 municipalities were randomly allocated to the intervention (51 schools) or control (43 schools) group. At baseline in September 2010, 4,468 year 7 students were eligible, of whom 4,167 answered the baseline questionnaire (response rate = 93.3%). The X:IT study is a large, randomized controlled trial evaluating the effect of an intervention based on components proven to be effective in other Nordic settings. The X:IT study targets students, their parents, and smoking prevention policies at the schools. These elements have proven to be effective tools in preventing smoking among adolescents. Program implementation is thoroughly evaluated in order to add to current knowledge of the importance of implementation. X:IT creates the basis for thorough effect and process evaluation, focusing on various social groups. Current Controlled Trials ISRCTN77415416.
Design of a school-based randomized trial to reduce smoking among 13 to 15-year olds, the X:IT study
2014-01-01
Background: Adolescent smoking is still highly prevalent in Denmark. One in four 13-year olds indicates that they have tried to smoke, and one in four 15-year olds answer that they smoke regularly. Smoking is more prevalent in socioeconomically disadvantaged populations in Denmark as well as in most Western countries. Previous school-based programs to prevent smoking have shown contrasting results internationally. In Denmark, previous programs have shown limited or no effect. This indicates a need for developing a well-designed, comprehensive, and multi-component intervention aimed at Danish schools with careful implementation and thorough evaluation. This paper describes X:IT, a study including 1) the development of a 3-year school-based multi-component intervention and 2) the randomized trial investigating the effect of the intervention. The study aims at reducing the prevalence of smoking among 13 to 15-year olds by 25%. Methods/Design: The X:IT study is based on the Theory of Triadic Influences. The theory organizes factors influencing adolescent smoking into three streams: cultural environment, social situation, and personal factors. We added a fourth stream, the community aspects. The X:IT program comprises three main components: 1) smoke-free school premises, 2) parental involvement including smoke-free dialogues and smoke-free contracts between students and parents, and 3) a curricular component. The study encompasses process- and effect-evaluations as well as health economic analyses. Ninety-four schools in 17 municipalities were randomly allocated to the intervention (51 schools) or control (43 schools) group. At baseline in September 2010, 4,468 year 7 students were eligible, of whom 4,167 answered the baseline questionnaire (response rate = 93.3%). Discussion: The X:IT study is a large, randomized controlled trial evaluating the effect of an intervention based on components proven to be effective in other Nordic settings. The X:IT study targets students, their parents, and smoking prevention policies at the schools. These elements have proven to be effective tools in preventing smoking among adolescents. Program implementation is thoroughly evaluated in order to add to current knowledge of the importance of implementation. X:IT creates the basis for thorough effect and process evaluation, focusing on various social groups. Trial registration: Current Controlled Trials ISRCTN77415416. PMID:24886206
ERIC Educational Resources Information Center
O'Connor, Thomas G.; Matias, Carla; Futh, Annabel; Tantam, Grace; Scott, Stephen
2013-01-01
Parenting programs for school-aged children are typically based on behavioral principles as applied in social learning theory. It is not yet clear if the benefits of these interventions extend beyond aspects of the parent-child relationship quality conceptualized by social learning theory. The current study examined the extent to which a social…
Lucas, Todd; Lumley, Mark A.; Flack, John M.; Wegner, Rhiana; Pierce, Jennifer; Goetz, Stefan
2016-01-01
Objective: According to worldview verification theory, inconsistencies between lived experiences and worldviews are psychologically threatening. These inconsistencies may be key determinants of stress processes that influence cardiovascular health disparities. This preliminary examination considers how experiencing injustice can affect perceived racism and biological stress reactivity among African Americans. Guided by worldview verification theory, it was hypothesized that responses to receiving an unfair outcome would be moderated by fairness of the accompanying decision process, and that this effect would further depend on the consistency of the decision process with preexisting justice beliefs. Method: A sample of 118 healthy African American adults completed baseline measures of justice beliefs, followed by a laboratory-based social-evaluative stressor task. Two randomized fairness manipulations were implemented during the task: participants were given either high or low levels of distributive (outcome) and procedural (decision process) justice. Glucocorticoid (cortisol) and inflammatory (C-reactive protein) biological responses were measured in oral fluids, and attributions of racism were also measured. Results: The hypothesized 3-way interaction was generally obtained. Among African Americans with a strong belief in justice, perceived racism, cortisol and C-reactive protein responses to low distributive justice were higher when procedural justice was low. Among African Americans with a weak belief in justice, however, these responses were higher when a low level of distributive justice was coupled with high procedural justice. Conclusions: Biological and psychological processes that contribute to cardiovascular health disparities are affected by consistency between individual-level and contextual justice factors. PMID:27018728
Ferromagnetic clusters induced by a nonmagnetic random disorder in diluted magnetic semiconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bui, Dinh-Hoi (Physics Department, Hue University’s College of Education, 34 Le Loi, Hue); Phan, Van-Nham (E-mail: phanvannham@dtu.edu.vn)
In this work, we analyze how nonmagnetic random disorder leads to the formation of ferromagnetic clusters in diluted magnetic semiconductors. The nonmagnetic random disorder arises from randomness in the host lattice. Including the disorder in the Kondo lattice model with a random distribution of magnetic dopants, the ferromagnetic-paramagnetic transition in the system is investigated in the framework of dynamical mean-field theory. At a certain low temperature, one finds a fraction of ferromagnetic sites transitioning to the paramagnetic state. As the nonmagnetic random disorder strength is increased, the paramagnetic regions expand, resulting in the formation of ferromagnetic clusters.
Chaos and random matrices in supersymmetric SYK
NASA Astrophysics Data System (ADS)
Hunter-Jones, Nicholas; Liu, Junyu
2018-05-01
We use random matrix theory to explore late-time chaos in supersymmetric quantum mechanical systems. Motivated by the recent study of supersymmetric SYK models and their random matrix classification, we consider the Wishart-Laguerre unitary ensemble and compute the spectral form factors and frame potentials to quantify chaos and randomness. Compared to the Gaussian ensembles, we observe the absence of a dip regime in the form factor and a slower approach to Haar-random dynamics. We find agreement between our random matrix analysis and predictions from the supersymmetric SYK model, and discuss the implications for supersymmetric chaotic systems.
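The spectral form factor referred to above is straightforward to estimate numerically. The sketch below does so for a generic Wishart-type ensemble, not the supersymmetric SYK Hamiltonian itself; the matrix size, sample count and normalization are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N, samples = 64, 200
times = np.logspace(-1, 2, 120)

sff = np.zeros_like(times)
for _ in range(samples):
    a = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    h = a @ a.conj().T / (2 * N)                 # Wishart-type positive matrix
    e = np.linalg.eigvalsh(h)
    z = np.exp(-1j * np.outer(times, e)).sum(axis=1)   # Z(t) = sum_n exp(-i E_n t)
    sff += (np.abs(z) ** 2) / samples            # ensemble-averaged |Z(t)|^2

# Roughly N^2 at early times, decaying toward a late-time plateau of order N.
print(sff[0], sff[-1])
```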
Unifying model for random matrix theory in arbitrary space dimensions
NASA Astrophysics Data System (ADS)
Cicuta, Giovanni M.; Krausser, Johannes; Milkus, Rico; Zaccone, Alessio
2018-03-01
A sparse random block matrix model suggested by the Hessian matrix used in the study of elastic vibrational modes of amorphous solids is presented and analyzed. By evaluating some moments, benchmarked against numerics, differences in the eigenvalue spectrum of this model in different limits of space dimension d, and for arbitrary values of the lattice coordination number Z, are shown and discussed. As a function of these two parameters (and their ratio Z/d), the most studied models in random matrix theory (Erdos-Renyi graphs, effective medium, and replicas) can be reproduced in the various limits of block dimensionality d. Remarkably, the Marchenko-Pastur spectral density (which is recovered by replica calculations for the Laplacian matrix) is reproduced exactly in the limit of infinite size of the blocks, or d → ∞, which clarifies the physical meaning of space dimension in these models. We feel that the approximate results for d = 3 provided by our method may have many potential applications in the future, from the vibrational spectrum of glasses and elastic networks to wave localization, disordered conductors, random resistor networks, and random walks.
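The Marchenko-Pastur density mentioned as the d → ∞ limit can be checked numerically against a plain Wishart matrix; the sketch below compares an eigenvalue histogram to the closed-form density for an assumed aspect ratio q = N/T.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 400, 1600                     # aspect ratio q = N/T = 0.25
X = rng.standard_normal((N, T))
evals = np.linalg.eigvalsh(X @ X.T / T)

q = N / T
lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2   # MP support edges

def mp_density(x, q):
    """Marchenko-Pastur spectral density for unit variance and ratio q."""
    return np.sqrt(np.maximum((hi - x) * (x - lo), 0)) / (2 * np.pi * q * x)

hist, edges = np.histogram(evals, bins=40, range=(lo, hi), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# Deviation from the closed form is modest at this finite N.
print(np.max(np.abs(hist - mp_density(centers, q))))
```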
The current status of REH theory. [Random Evolutionary Hits in biological molecular evolution
NASA Technical Reports Server (NTRS)
Holmquist, R.; Jukes, T. H.
1981-01-01
A response is made to the evaluation of Fitch (1980) of REH (random evolutionary hits) theory for the evolutionary divergence of proteins and nucleic acids. Correct calculations for the beta hemoglobin mRNAs of the human, mouse and rabbit in the absence and presence of selective constraints are summarized, and it is shown that the alternative evolutionary analysis of Fitch underestimates the total fixed mutations. It is further shown that the model used by Fitch to test for the completeness of the count of total base substitutions is in fact a variant of REH theory. Considerations of the variance inherent in evolutionary estimations are also presented which show the REH model to produce no more variance than other evolutionary models. In the reply, it is argued that, despite the objections raised, REH theory applied to proteins gives inaccurate estimates of total gene substitutions. It is further contended that REH theory developed for nucleic acid sequences suffers from problems relating to the frequency of nucleotide substitutions, the identity of the codons accepting silent and amino acid-changing substitutions, and estimate uncertainties.
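REH theory itself is not reproduced here, but the underlying issue, that observed sequence differences undercount total fixed mutations because of multiple hits at the same site, can be illustrated with the simpler one-parameter Jukes-Cantor correction:

```python
import math

def jukes_cantor_hits(p_observed):
    """Expected substitutions per site given an observed fraction of differing
    sites, under the one-parameter Jukes-Cantor model (a stand-in, not REH)."""
    return -0.75 * math.log(1.0 - 4.0 * p_observed / 3.0)

# If 20% of sites differ between two sequences, the corrected estimate of total
# substitutions exceeds the naive count because of multiple hits:
print(round(jukes_cantor_hits(0.20), 3))   # ~0.233 substitutions per site
```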
Critical exponents for diluted resistor networks
NASA Astrophysics Data System (ADS)
Stenull, O.; Janssen, H. K.; Oerding, K.
1999-05-01
An approach by Stephen [Phys. Rev. B 17, 4444 (1978)] is used to investigate the critical properties of randomly diluted resistor networks near the percolation threshold by means of renormalized field theory. We reformulate an existing field theory by Harris and Lubensky [Phys. Rev. B 35, 6964 (1987)]. By a decomposition of the principal Feynman diagrams, we obtain diagrams which again can be interpreted as resistor networks. This interpretation provides for an alternative way of evaluating the Feynman diagrams for random resistor networks. We calculate the resistance crossover exponent φ up to second order in ɛ = 6 - d, where d is the spatial dimension. Our result φ = 1 + ɛ/42 + 4ɛ²/3087 verifies a previous calculation by Lubensky and Wang, which itself was based on the Potts-model formulation of the random resistor network.
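Evaluating the quoted series naively at integer dimensions (no resummation attempted) gives a quick feel for the size of the corrections:

```python
# Numerical values of the two-loop series phi = 1 + eps/42 + 4*eps^2/3087,
# with eps = 6 - d, evaluated term by term.
for d in (5, 4, 3, 2):
    eps = 6 - d
    phi = 1 + eps / 42 + 4 * eps**2 / 3087
    print(d, round(phi, 4))
```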
Horizon in Random Matrix Theory, the Hawking Radiation, and Flow of Cold Atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franchini, Fabio; Kravtsov, Vladimir E.
2009-10-16
We propose a Gaussian scalar field theory in a curved 2D metric with an event horizon as the low-energy effective theory for a weakly confined, invariant random matrix ensemble (RME). The presence of an event horizon naturally generates a bath of Hawking radiation, which introduces a finite temperature in the model in a nontrivial way. A similar mapping with a gravitational analogue model has been constructed for a Bose-Einstein condensate (BEC) pushed to flow at a velocity higher than its speed of sound, with Hawking radiation as sound waves propagating over the cold atoms. Our work suggests a threefold connection between a moving BEC system, black-hole physics and unconventional RMEs, with possible experimental applications.
Liu, Hong; Zhu, Jingping; Wang, Kai
2015-08-24
The geometrical attenuation model given by Blinn has been widely used in geometrical-optics bidirectional reflectance distribution function (BRDF) models. Blinn's geometrical attenuation model, based on a symmetrical V-groove assumption and scalar ray theory, causes obvious inaccuracies in BRDF curves and neglects the effects of polarization. To address these issues, a modified polarized geometrical attenuation model based on random-surface microfacet theory is presented, combining masking and shadowing effects with polarization effects. The p-polarized, s-polarized and unpolarized geometrical attenuation functions are given as separate expressions and are validated with experimental data from two samples. The results show that the modified polarized geometrical attenuation function achieves better physical plausibility, improves the precision of the BRDF model, and widens its applicability to different polarizations.
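For reference, the classical scalar Blinn attenuation term that the paper modifies can be written down directly. The sketch below implements the standard symmetric V-groove formula G = min(1, 2(N·H)(N·V)/(V·H), 2(N·H)(N·L)/(V·H)) with arbitrary example vectors; it does not include the paper's polarized modification.

```python
import numpy as np

def blinn_geometric_attenuation(n, l, v):
    """Classical Blinn masking-shadowing term for the symmetric V-groove model."""
    h = (l + v) / np.linalg.norm(l + v)      # half-vector between light and view
    nh, nv, nl, vh = n @ h, n @ v, n @ l, v @ h
    return min(1.0, 2 * nh * nv / vh, 2 * nh * nl / vh)

n = np.array([0.0, 0.0, 1.0])                          # surface normal
l = np.array([0.3, 0.0, 1.0]); l /= np.linalg.norm(l)  # light direction
v = np.array([-2.0, 0.0, 0.3]); v /= np.linalg.norm(v) # grazing view direction
print(blinn_geometric_attenuation(n, l, v))            # < 1: masking is active
```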
Thakore, Bhoomi K; Naffziger-Hirsch, Michelle E; Richardson, Jennifer L; Williams, Simon N; McGee, Richard
2014-08-02
Approaches to training biomedical scientists have created a talented research community. However, they have failed to create a professional workforce that includes many racial and ethnic minorities and women in proportion to their representation in the population or in PhD training. This is particularly true at the faculty level. Explanations for the absence of diversity in faculty ranks can be found in social science theories that reveal processes by which individuals develop identities, experiences, and skills required to be seen as legitimate within the profession. Using the social science theories of Communities of Practice, Social Cognitive Career Theory, identity formation, and cultural capital, we have developed and are testing a novel coaching-based model to address some of the limitations of previous diversity approaches. This coaching intervention (The Academy for Future Science Faculty) includes annual in-person meetings of students and trained faculty Career Coaches, along with ongoing virtual coaching, group meetings and communication. The model is being tested as a randomized controlled trial with two cohorts of biomedical PhD students from across the U.S., one recruited at the start of their PhDs and one nearing completion. Stratification into the experimental and control groups, and to coaching groups within the experimental arms, achieved equal numbers of students by race, ethnicity and gender to the extent possible. A fundamental design element of the Academy is to teach and make visible the social science principles which highly influence scientific advancement, as well as acknowledging the extra challenges faced by underrepresented groups working to be seen as legitimate within the scientific communities. The strategy being tested is based upon a novel application of the well-established principles of deploying highly skilled coaches, selected and trained for their ability to develop talents of others. This coaching model is intended to be a complement, rather than a substitute, for traditional mentoring in biomedical research training, and is being tested as such.
Sojourning with the Homogeneous Poisson Process.
Liu, Piaomu; Peña, Edsel A
2016-01-01
In this pedagogical article, distributional properties, some surprising, pertaining to the homogeneous Poisson process (HPP), when observed over a possibly random window, are presented. Properties of the gap-time that covered the termination time and the correlations among gap-times of the observed events are obtained. Inference procedures, such as estimation and model validation, based on event occurrence data over the observation window, are also presented. We envision that through the results in this paper, a better appreciation of the subtleties involved in the modeling and analysis of recurrent events data will ensue, since the HPP is arguably one of the simplest among recurrent event models. In addition, the use of the theorem of total probability, Bayes theorem, the iterated rules of expectation, variance and covariance, and the renewal equation could be illustrative when teaching distribution theory, mathematical statistics, and stochastic processes at both the undergraduate and graduate levels. This article is targeted towards both instructors and students.
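One of the surprising HPP properties alluded to, the inspection paradox, is easy to demonstrate by simulation: the gap that covers a fixed termination time is on average about twice the mean gap. The sketch below uses a fixed (non-random) window and an illustrative rate.

```python
import numpy as np

rng = np.random.default_rng(11)
RATE, TAU = 1.0, 50.0            # HPP rate and (fixed) termination time

covering_gaps = []
for _ in range(20_000):
    # Generate enough arrivals to pass TAU so the covering gap is observed.
    gaps = rng.exponential(1 / RATE, size=int(3 * RATE * TAU))
    arrivals = np.cumsum(gaps)
    i = np.searchsorted(arrivals, TAU)       # first arrival after TAU
    left = arrivals[i - 1] if i > 0 else 0.0
    covering_gaps.append(arrivals[i] - left)

# Inspection paradox: the covering gap averages ~2/RATE, not 1/RATE.
print(np.mean(covering_gaps))
```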
Büchi, S; Straub, S; Schwager, U
2010-12-01
Although there is much talk about shared decision making and individualized goal setting, there is a lack of knowledge and know-how about their realization in daily clinical practice, and a lack of easily applicable tools to support person-centred, individualized goal-setting processes. For three selected psychiatric inpatients with complex psychiatric problems, the semistructured, theory-driven use of PRISM (Pictorial Representation of Illness and Self Measure) is presented and discussed. PRISM sustains a person-centred, individualized process of goal setting and treatment and reinforces the active participation of patients. The process of visualisation and synchronous documentation was rated positively by patients and clinicians. The visual goal setting requires 30 to 45 minutes. In patients with complex psychiatric illness, PRISM was used successfully to improve individual goal setting. Specific effects of PRISM visualisation are currently being evaluated in a randomized controlled trial.
Giesbrecht, Edward M; Miller, William C; Eng, Janice J; Mitchell, Ian M; Woodgate, Roberta L; Goldsmith, Charles H
2013-10-24
Many older adults rely on a manual wheelchair for mobility but typically receive little, if any, training on how to use their wheelchair effectively and independently. Standardized skill training is an effective intervention, but limited access to clinician trainers is a substantive barrier. Enhancing Participation in the Community by Improving Wheelchair Skills (EPIC Wheels) is a 1-month monitored home training program for improving mobility skills in older novice manual wheelchair users, integrating principles from andragogy and social cognitive theory. The purpose of this study is to determine whether feasibility indicators and primary clinical outcome measures of the EPIC Wheels program are sufficiently robust to justify conducting a subsequent multi-site randomized controlled trial. A 2 × 2 factorial randomized controlled trial at two sites will compare improvement in wheelchair mobility skills between an EPIC Wheels treatment group and a computer-game control group, with additional wheelchair use introduced as a second factor. A total of 40 community-dwelling manual wheelchair users at least 55 years old and living in two Canadian metropolitan cities (n = 20 × 2) will be recruited. Feasibility indicators related to study process, resources, management, and treatment issues will be collected during data collection and at the end of the study period, and evaluated against proposed criteria. Clinical outcome measures will be collected at baseline (pre-randomization) and post-intervention. The primary clinical outcome measure is wheelchair skill capacity, as determined by the Wheelchair Skills Test, version 4.1. Secondary clinical outcome measures include wheelchair skill safety, satisfaction with performance, wheelchair confidence, life-space mobility, divided-attention, and health-related quality of life. The EPIC Wheels training program offers several innovative features. The convenient, portable, economical, and adaptable tablet-based, home program model for wheelchair skills training has great potential for clinical uptake and opportunity for future enhancements. Theory-driven design can foster learning and adherence for older adults. Establishing the feasibility of the study protocol and estimating effect size for the primary clinical outcome measure will be used to develop a multi-site randomized controlled trial to test the guiding hypotheses. Clinical Trials NCT01740635.
Zeinab, Jalambadani; Gholamreza, Garmaroudi; Mehdi, Yaseri; Mahmood, Tavousi; Korush, Jafarian
2017-01-01
Background: The Trans-Theoretical Model (TTM) and the Theory of Planned Behaviour (TPB) may be promising models for understanding and predicting reduction in the consumption of fast food. The aim of this study was to examine the applicability of the TTM and the additional predictive role of subjective norms and perceived behavioural control in predicting reduction in the consumption of fast food among obese Iranian adolescent girls. Materials and Methods: A cross-sectional study was conducted in twelve randomly selected schools in Sabzevar, Iran, from 2015 to 2017. Four hundred eighty-five randomly selected students consented to participate in the study. Hierarchical regression models were used to assess the role of important variables that can influence the reduction in the consumption of fast food among students, using SPSS version 22. Results: Perceived behavioural control (r=0.58, P<0.001), subjective norms (r=0.51, P<0.001), self-efficacy (r=0.49, P<0.001), decisional balance (pros) (r=0.29, P<0.001), decisional balance (cons) (r=0.25, P<0.001), and stage of change (r=0.38, P<0.001) were significantly and positively correlated, while experiential processes of change (r=0.08, P=0.135) and behavioural processes of change (r=0.09, P=0.145) were not significant. Conclusions: The study demonstrated that the TTM (except for the experiential and behavioural processes of change), with a focus on perceived behavioural control and subjective norms, is a useful model for reduction in the consumption of fast food. Significance for public health: The Ministries of Education and Public Health should cooperate in supporting the formal and non-formal school, family and community nutritional education and activities mentioned above. Lastly, the Ministry of Public Health should conduct programmes with restaurant owners on healthy Iranian food and its hygienic presentation and promotion, to enhance their ability to compete with fast-food restaurants. PMID:29071252
Changing Beliefs about Trauma: A Qualitative Study of Cognitive Processing Therapy.
Price, Jennifer L; MacDonald, Helen Z; Adair, Kathryn C; Koerner, Naomi; Monson, Candice M
2016-03-01
Controlled qualitative methods complement quantitative treatment outcome research and enable a more thorough understanding of the effects of therapy and the suspected mechanisms of action. Thematic analyses were used to examine outcomes of cognitive processing therapy (CPT) for posttraumatic stress disorder (PTSD) in a randomized controlled trial of individuals diagnosed with military-related PTSD (n = 15). After sessions 1 and 11, participants wrote "impact statements" describing their appraisals of their trauma and beliefs potentially impacted by traumatic events. Trained raters coded each of these statements using a thematic coding scheme. An analysis of thematic coding revealed positive changes over the course of therapy in participants' perspective on their trauma and their future, supporting the purported mechanisms of CPT. Implications of this research for theory and clinical practice are discussed.
Learning Time-Varying Coverage Functions
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2015-01-01
Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data. PMID:25960624
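The "law of diminishing returns" this work builds on is the submodularity of coverage functions, which a few lines can demonstrate on a random bipartite reach set; this illustrates the function class only, not the paper's random-feature estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n_actions, n_targets = 8, 200
# Random bipartite "reach" sets: which targets each action covers.
reach = rng.random((n_actions, n_targets)) < 0.15

def coverage(subset):
    """Coverage value: number of targets reached by at least one chosen action."""
    if not subset:
        return 0.0
    return float(reach[list(subset)].any(axis=0).sum())

# Diminishing returns: the marginal gain of adding action 0 shrinks as the set grows.
small, large = {1}, {1, 2, 3, 4, 5}
gain_small = coverage(small | {0}) - coverage(small)
gain_large = coverage(large | {0}) - coverage(large)
print(gain_small >= gain_large)   # True for any coverage function (submodularity)
```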
NASA Astrophysics Data System (ADS)
Laptev, A. G.; Basharov, M. M.; Farakhova, A. I.
2013-09-01
The process through which small droplets contained in emulsions are physically coagulated on the surface of random packing elements is considered. The theory of turbulent migration of a finely dispersed phase is used for determining the coagulation efficiency. Expressions for calculating coagulation efficiency and turbulent transfer rate are obtained by applying models of a turbulent boundary layer. An example of calculating the enlargement of water droplets in hydrocarbon medium represented by a wide fraction of light hydrocarbons (also known as natural gas liquid) is given. The process flowchart of a system for removing petroleum products from effluent waters discharged from the Kazan TETs-1 cogeneration station is considered. Replacement of the mechanical filter by a thin-layer settler with a coagulator is proposed.
Chibanda, Dixon; Verhey, Ruth; Munetsi, Epiphany; Cowan, Frances M; Lund, Crick
2016-01-01
There is a paucity of data on how to deliver complex interventions that seek to reduce the treatment gap for mental disorders, particularly in sub-Saharan Africa. The need for well-documented protocols which clearly describe the development and the scale-up of programs and interventions is necessary if such interventions are to be replicated elsewhere. This article describes the use of a theory of change (ToC) model to develop a brief psychological intervention for common mental disorders and its' evaluation through a cluster randomized controlled trial in Zimbabwe. A total of eight ToC workshops were held with a range of stakeholders over a 6-month period with a focus on four key components of the program: formative work, piloting, evaluation and scale-up. A ToC map was developed as part of the process with defined causal pathways leading to the desired impact. Interventions, indicators, assumptions and rationale for each point along the causal pathway were considered. Political buy-in from stakeholders together with key resources, which included human, facility/infrastructure, communication and supervision were identified as critical needs using the ToC approach. Ten (10) key interventions with specific indicators, assumptions and rationale formed part of the final ToC map, which graphically illustrated the causal pathway leading to the development of a psychological intervention and the successful implementation of a cluster randomized controlled trial. ToC workshops can enhance stakeholder engagement through an iterative process leading to a shared vision that can improve outcomes of complex mental health interventions particularly where scaling up of the intervention is desired.
Huang, Ping; Tan, Shanzhong; Zhang, Yong-xin; Li, Jun-song; Chai, Chuan; Li, Jin-ji; Cai, Bao-chang
2014-08-08
Ascending and descending theory is a core principle of traditional Chinese medicine (TCM). It plays an essential role in TCM clinical applications. Some TCM medicines have specific properties that can alter the inclination and direction of their actions. The ascending and floating properties of a herbal medicine can be modified by means of herb processing. Wine-processing, which is sautéing with rice wine, is one of the most popular technologies of herb processing. Wine-processing alters the inclination and direction of a medicine's actions, thereby producing or strengthening its efficacy in clearing upper-energizer heat. Radix scutellariae, the dried root of Scutellaria baicalensis Georgi, is a well-known TCM used for the treatment of inflammation, pyrexia, jaundice, etc. Recently, wine-processed Radix scutellariae has commonly been applied in clinical studies for the treatment of upper-energizer syndrome. In order to investigate the effects of wine-processing on the ascending and descending actions of Radix scutellariae, a comparative study of the distribution of flavonoids in rat tissues of the triple energizers (SanJiao: upper, middle and lower jiao) after oral administration of crude and wine-processed Radix scutellariae aqueous extracts was carried out. The rats were randomly assigned to two groups and orally administered crude or wine-processed Radix scutellariae aqueous extracts, respectively. At pre-determined time points after administration, the concentrations of the compounds in rat tissue homogenates were determined, and the main tissue pharmacokinetic parameters were investigated. Tissue pharmacokinetic parameters including AUC0-t, t1/2, Tmax and Cmax were calculated using DAS 2.0. An unpaired Student t-test was used to compare differences in tissue pharmacokinetic parameters between the two groups. All results are expressed as arithmetic mean±S.D. The Cmax and AUC0-t of some flavonoids in the wine-processed group were remarkably increased (p<0.05, p<0.01, p<0.001) in the rat upper-energizer tissues (lung and heart) compared with the crude group. However, in the rat middle- and lower-energizer tissues (spleen, liver and kidney), the Cmax and AUC0-t of some flavonoids were significantly decreased (p<0.05, p<0.01) compared with the crude group. The main explanation for these differences appears to be the effect of wine-processing described by ascending and descending theory. These differences in distribution across the triple energizers after oral administration of crude and wine-processed Radix scutellariae aqueous extracts may lead to increased efficacy in the upper-energizer tissues and are consistent with ascending and descending theory. Therefore, wine-processing is recommended when Radix scutellariae is used for clearing upper-energizer heat and dampness. The knowledge obtained can be used to evaluate the impact of these differences on the efficacy of both drugs in clinical applications and might be helpful in explaining the effects of wine-processing in terms of ascending and descending theory.
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many real-world problems today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
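A minimal sketch of the ordinal-comparison step described above. The evaluator, design space, and softened-goal sizes g and s are illustrative stand-ins (the thesis's expensive FEM simulation is replaced here by a cheap function plus white noise); the point is that ranking noisy evaluations and keeping a softened top set still retains many truly good designs.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_cost(x):
    # Stand-in for the expensive deterministic simulation.
    return (x - 0.3) ** 2

def noisy_eval(x, sigma=0.5):
    # Complexity modeled as white noise on a simple trend
    # (the "stochastic pseudo-model" of the thesis).
    return true_cost(x) + rng.normal(0.0, sigma)

designs = rng.uniform(0, 1, size=1000)            # sampled design space
observed = np.array([noisy_eval(x) for x in designs])

g, s = 50, 50                                     # softened goal / selected set sizes
good = set(np.argsort([true_cost(x) for x in designs])[:g])   # true top-g
selected = set(np.argsort(observed)[:s])          # observed top-s (ordinal comparison)

# Alignment: how many truly good designs the cheap ordinal screen retains
# despite the large evaluation noise.
print(f"overlap of G and S: {len(good & selected)} of {g}")
```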
Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal
NASA Astrophysics Data System (ADS)
Zamudio, Gabriel S.; José, Marco V.
2018-03-01
In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
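A hedged sketch of the connectivity criterion: the algebraic connectivity (second-smallest Laplacian eigenvalue) of a graph, compared against degree-preserving randomizations. The graph below is an arbitrary stand-in, since the paper's phenotypic graphs are built from codon assignments not reproduced here.

```python
import networkx as nx
import numpy as np

def algebraic_connectivity(G):
    # Second-smallest eigenvalue of the graph Laplacian (Fiedler value).
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    return lam[1]

# Illustrative stand-in for a phenotypic graph.
G = nx.random_regular_graph(4, 32, seed=1)
obs = algebraic_connectivity(G)

# Null distribution from degree-preserving edge swaps.
null = []
for seed in range(200):
    H = G.copy()
    nx.double_edge_swap(H, nswap=4 * H.number_of_edges(), max_tries=10**5, seed=seed)
    null.append(algebraic_connectivity(H))

# Per the paper's criterion, an error-correcting optimal code minimizes the
# algebraic connectivity relative to the randomized ensemble.
print(obs, np.mean(null), np.mean(obs <= np.array(null)))
```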
NASA Astrophysics Data System (ADS)
Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi
2012-10-01
In this Letter, we propose a framework to transform a complex network to a time series. The transformation from complex networks to time series is realized by the classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed to periodic time series, small-world networks to noisy periodic time series, and random networks to random time series. We also show that these relationships are analytically held by using the circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
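The transformation is easy to sketch: classical (Torgerson) MDS applied to the shortest-path distance matrix of a Watts-Strogatz graph, with the leading coordinate read off in node order as the "time series". This is a plain reading of the Letter's recipe, not the authors' code.

```python
import networkx as nx
import numpy as np

def classical_mds(D, dim=1):
    # Classical (Torgerson) multidimensional scaling of a distance matrix D.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J             # double-centered squared distances
    lam, V = np.linalg.eigh(B)
    idx = np.argsort(lam)[::-1][:dim]       # leading eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

# Watts-Strogatz model: p = 0 is a ring lattice, small p a small world,
# p = 1 close to a random network.
for p in (0.0, 0.1, 1.0):
    G = nx.watts_strogatz_graph(200, 4, p, seed=2)
    D = np.array(nx.floyd_warshall_numpy(G))
    series = classical_mds(D, dim=1)[:, 0]  # "time series": coordinate vs node index
    print(p, series[:5].round(2))
```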
Automated mapping of the ocean floor using the theory of intrinsic random functions of order k
David, M.; Crozel, D.; Robb, James M.
1986-01-01
High-quality contour maps can be computer drawn from single-track echo-sounding data by combining Universal Kriging and the theory of intrinsic random functions of order k (IRF-k). These methods interpolate values among the closely spaced points that lie along relatively widely spaced lines. The technique provides a variance which can be contoured as a quantitative measure of map precision. The technique can be used to evaluate alternative survey trackline configurations and data collection intervals, and can be applied to other types of oceanographic data. © 1986 D. Reidel Publishing Company.
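A simplified sketch of the interpolation step, using ordinary kriging with an assumed linear variogram; the IRF-k machinery (drift filtering, generalized covariances) is omitted, and the track geometry and depths are invented for illustration. The solve returns both the estimate and the kriging variance, the map-precision measure mentioned above.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, variogram=lambda h: 1.0 * h):
    # Ordinary kriging of depth z at location xy0 from track points xy,
    # with an illustrative linear variogram.
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = variogram(h); A[n, n] = 0.0
    b = np.append(variogram(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)               # weights plus Lagrange multiplier
    estimate = w[:n] @ z
    variance = w @ b                        # kriging variance at xy0
    return estimate, variance

# Closely spaced soundings along two widely spaced tracks.
rng = np.random.default_rng(3)
track = np.vstack([np.column_stack([np.linspace(0, 10, 50), np.full(50, y)])
                   for y in (0.0, 5.0)])
depth = -100 + np.sin(track[:, 0]) + 0.3 * track[:, 1] + rng.normal(0, 0.05, len(track))
print(ordinary_kriging(track, depth, np.array([5.0, 2.5])))
```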
Akemann, G; Bloch, J; Shifrin, L; Wettig, T
2008-01-25
We analyze how individual eigenvalues of the QCD Dirac operator at nonzero quark chemical potential are distributed in the complex plane. Exact and approximate analytical results for both quenched and unquenched distributions are derived from non-Hermitian random matrix theory. When comparing these to quenched lattice QCD spectra close to the origin, excellent agreement is found for zero and nonzero topology at several values of the quark chemical potential. Our analytical results are also applicable to other physical systems in the same symmetry class.
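For intuition only, a sketch that samples the complex eigenvalue cloud of a generic non-Hermitian (complex Ginibre) ensemble; this is a stand-in for, not an implementation of, the chiral ensemble at nonzero chemical potential analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 400
# Complex Ginibre matrix, normalized so the spectrum fills the unit disk.
A = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
ev = np.linalg.eigvals(A)

# Circular-law check: eigenvalues fill the unit disk almost uniformly.
print("max |lambda| ~", np.abs(ev).max())
print("fraction inside |lambda| < 0.5:", np.mean(np.abs(ev) < 0.5))  # ~ 0.25
```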
Random matrix theory and portfolio optimization in Moroccan stock exchange
NASA Astrophysics Data System (ADS)
El Alaoui, Marwane
2015-09-01
In this work, we use random matrix theory to analyze the eigenvalues of the cross-correlation matrix and to test for the presence of pertinent information using the Marčenko-Pastur distribution. Thus, we study cross-correlations among stocks of the Casablanca Stock Exchange. Moreover, we clean the correlation matrix of noisy elements to see whether the gap between predicted risk and realized risk is reduced. We also analyze the distributions of eigenvector components and their degree of deviation by computing the inverse participation ratio. This analysis is a way to understand the correlation structure among stocks of the Casablanca Stock Exchange portfolio.
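A minimal sketch of the Marčenko-Pastur workflow on synthetic returns (stand-ins for the Casablanca Stock Exchange data): flag eigenvalues inside the noise band, clean them by one common trace-preserving recipe (which may differ from the paper's exact scheme), and compute the inverse participation ratio of the eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 50, 500                     # assets, observations (illustrative sizes)
returns = rng.normal(size=(T, N))  # stand-in for real stock returns
C = np.corrcoef(returns, rowvar=False)

# Marcenko-Pastur upper bound for a purely random correlation matrix.
q = N / T
lam_max = (1 + np.sqrt(q)) ** 2
lam, V = np.linalg.eigh(C)

# "Cleaning": replace eigenvalues inside the noise band by their average,
# which preserves the trace.
noise = lam < lam_max
lam_clean = lam.copy()
lam_clean[noise] = lam[noise].mean()
C_clean = V @ np.diag(lam_clean) @ V.T

# Inverse participation ratio of each eigenvector: ~1/N for delocalized
# (noise) modes, O(1) for localized ones.
ipr = (V ** 4).sum(axis=0)
print(f"{noise.sum()} of {N} eigenvalues in the MP band; mean IPR = {ipr.mean():.4f}")
```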
Duong, Manh Hong; Han, The Anh
2016-12-01
In this paper, we study the distribution and behaviour of internal equilibria in a d-player n-strategy random evolutionary game where the game payoff matrix is generated from normal distributions. This study reveals and exploits interesting connections between evolutionary game theory and random polynomial theory. The main contributions of the paper are some qualitative and quantitative results on the expected density, [Formula: see text], and the expected number, E(n, d), of (stable) internal equilibria. Firstly, we show that in multi-player two-strategy games, they behave asymptotically as [Formula: see text] as d becomes sufficiently large. Secondly, we prove that they are monotone functions of d. We also make a conjecture for games with more than two strategies. Thirdly, we provide numerical simulations to validate our analytical results and to support the conjecture. As consequences of our analysis, some qualitative and quantitative results on the distribution of zeros of a random Bernstein polynomial are also obtained.
Random walk to a nonergodic equilibrium concept
NASA Astrophysics Data System (ADS)
Bel, G.; Barkai, E.
2006-01-01
Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking when the average waiting time is infinite. The open question is: what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated for a continuous time random walk model in a potential field. In particular, we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted by standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
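The arcsine-type occupation statistics are easy to see in simulation. The sketch below uses the classical Lévy arcsine law for a free symmetric random walk (the occupation fraction of x > 0 is Beta(1/2, 1/2) distributed), a simpler cousin of the potential-field distributions studied in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_walks, n_steps = 5000, 1000

steps = rng.choice([-1, 1], size=(n_walks, n_steps))
# Occupation-time fraction of the positive half-line for each walk.
frac_positive = (steps.cumsum(axis=1) > 0).mean(axis=1)

# Levy's arcsine law: the fraction is Beta(1/2, 1/2), i.e. U-shaped.
hist, edges = np.histogram(frac_positive, bins=10, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print(np.round(hist, 2))                               # empirical density
print(np.round(stats.beta(0.5, 0.5).pdf(centers), 2))  # arcsine density
```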
Zhang, Duan Z.; Padrino, Juan C.
2017-06-01
The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading-order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is xt^(-1/4) rather than xt^(-1/2) as in the traditional theory. We find that this early-time similarity can be explained by random walk theory through the network.
Modelling of Rail Vehicles and Track for Calculation of Ground-Vibration Transmission Into Buildings
NASA Astrophysics Data System (ADS)
Hunt, H. E. M.
1996-05-01
A methodology for the calculation of vibration transmission from railways into buildings is presented. The method permits existing models of railway vehicles and track to be incorporated, and it applies to any model of vibration transmission through the ground. Special attention is paid to the relative phasing between adjacent axle-force inputs to the rail, so that vibration transmission may be calculated as a random process. The vehicle-track model is used in conjunction with a building model of infinite length. The track and building are infinite and parallel to each other, and the forces applied are statistically stationary in space, so that vibration levels at any two points along the building are the same. The methodology is two-dimensional for the purpose of applying random process theory, but fully three-dimensional for the calculation of vibration transmission from the track and through the ground into the foundations of the building. The computational efficiency of the method will interest engineers faced with the task of reducing vibration levels in buildings. It is possible to assess the relative merits of using rail pads, under-sleeper pads, ballast mats, floating-slab track or base isolation for particular applications.
Jabbour, Mona; Curran, Janet; Scott, Shannon D; Guttman, Astrid; Rotter, Thomas; Ducharme, Francine M; Lougheed, M Diane; McNaughton-Filion, M Louise; Newton, Amanda; Shafir, Mark; Paprica, Alison; Klassen, Terry; Taljaard, Monica; Grimshaw, Jeremy; Johnson, David W
2013-05-22
The clinical pathway is a tool that operationalizes best evidence recommendations and clinical practice guidelines in an accessible format for 'point of care' management by multidisciplinary health teams in hospital settings. While high-quality, expert-developed clinical pathways have many potential benefits, their impact has been limited by variable implementation strategies and suboptimal research designs. Best strategies for implementing pathways into hospital settings remain unknown. This study will seek to develop and comprehensively evaluate best strategies for effective local implementation of externally developed expert clinical pathways. We will develop a theory-based and knowledge user-informed intervention strategy to implement two pediatric clinical pathways: asthma and gastroenteritis. Using a balanced incomplete block design, we will randomize 16 community emergency departments to receive the intervention for one clinical pathway and serve as control for the alternate clinical pathway, thus conducting two cluster randomized controlled trials to evaluate this implementation intervention. A minimization procedure will be used to randomize sites. Intervention sites will receive a tailored strategy to support full clinical pathway implementation. We will evaluate implementation strategy effectiveness through measurement of relevant process and clinical outcomes. The primary process outcome will be the presence of an appropriately completed clinical pathway on the chart for relevant patients. Primary clinical outcomes for each clinical pathway include the following: Asthma--the proportion of asthmatic patients treated appropriately with corticosteroids in the emergency department and at discharge; and Gastroenteritis--the proportion of relevant patients appropriately treated with oral rehydration therapy. Data sources include chart audits, administrative databases, environmental scans, and qualitative interviews. We will also conduct an overall process evaluation to assess the implementation strategy and an economic analysis to evaluate implementation costs and benefits. This study will contribute to the body of evidence supporting effective strategies for clinical pathway implementation, and ultimately reducing the research to practice gaps by operationalizing best evidence care recommendations through effective use of clinical pathways. ClinicalTrials.gov: NCT01815710.
Mirkarimi, Kamal; Eri, Maryam; Ghanbari, Mohammad R; Kabir, Mohammad J; Raeisi, Mojtaba; Ozouni-Davaji, Rahman B; Aryaie, Mohammad; Charkazi, Abdurrahman
2017-10-30
Guided by Protection Motivation Theory, we tested the effects of motivational interviewing on the attitudes and intentions of obese and overweight women to do regular physical activity. In a randomized controlled trial, we selected, using convenience sampling, 60 overweight and obese women attending health centres. The women were allocated to 2 groups of 30, receiving either a standard weight-control programme or motivational interviewing. All constructs of the theory (perceived susceptibility, severity, self-efficacy and response efficacy) and all anthropometric characteristics (except body mass index) differed significantly between the groups across the 3 study time points. The strongest predictors of intention to do regular physical exercise were perceived response efficacy and attitude at the 2- and 6-month follow-ups. We showed that motivational interviewing with an emphasis on Protection Motivation Theory constructs appears to be beneficial for designing and developing appropriate interventions to improve physical activity status among women with overweight and obesity.
Magnetic pumping of the solar wind
NASA Astrophysics Data System (ADS)
Egedal, Jan; Lichko, Emily; Daughton, William
2015-11-01
The transport of matter and radiation in the solar wind and terrestrial magnetosphere is a complicated problem involving competing processes of charged particles interacting with electric and magnetic fields. Given the rapid expansion of the solar wind, superthermal electrons originating in the corona would be expected to cool rapidly as a function of distance from the Sun. However, this is not observed, and various models have been proposed as candidates for heating the solar wind. In the compressional pumping mechanism explored by Fisk and Gloeckler, particles are accelerated by random compressions by the interplanetary wave turbulence. This theory explores diffusion due to spatial non-uniformities and provides a mechanism for redistributing particles. To investigate a related but different heating mechanism, magnetic pumping, our work includes diffusion of anisotropic features that develop in velocity space. The mechanism allows energy to be transferred to the particles directly from the turbulence. Guided by kinetic simulations, a theory is derived for magnetic pumping. At the heart of this work is a generalization of the Parker Equation to capture the role of the pressure anisotropy during the pumping process. Supported by NASA grant NNX15AJ73G.
NASA Astrophysics Data System (ADS)
Zaenudin; Maknun, J.; Muslim
2017-03-01
This study aims to describe the self-efficacy and initial cognitive abilities of students of MAN 1 Bandung (a senior high school) in learning physics on the subject of direct-current (DC) electrical circuits, before academic tasks are assigned in the classroom. The results of this research can be used as a reference for providing appropriate measures for the advancement of student learning. The theory used in this research is Bandura's theory. The design of this study is a case study; data collection was done by tests and questionnaires, and the sampling technique used was random sampling. The study was conducted on 10th-grade students of MAN 1 Bandung, with 35 participants. The results of the data analysis showed that 67.05% of students have moderate self-efficacy, and 50% moderate cognitive ability. This shows that the learning process in their previous schooling (junior high school) did not implement many scientific processes that give students the opportunity to discover new things; an appropriate learning approach is therefore Problem Based Learning (PBL).
NASA Astrophysics Data System (ADS)
Egli, Ramon; Zhao, Xiangyu
2015-04-01
We present a general theory on the acquisition of natural remanent magnetizations (NRM) in sediment under the influence of (a) magnetic torques, (b) randomizing torques (e.g. from bioturbation), and (c) torques resulting from interaction forces between remanence carriers and other particles. Dynamic equilibrium between (a) and (b) in the water column and at the sediment-water interface produces a detrital remanent magnetization (DRM), while much stronger randomizing forces occur in the mixed layer of sediment due to bioturbation. These generate a so-called mixing remanent magnetization (MRM), which is stabilized by interaction forces. During the time required to cross the mixed layer, DRM is lost and MRM is acquired at a rate that depends on bioturbation intensity. Both processes are governed by the same MRM lock-in function. The final NRM intensity is controlled mainly by a single parameter defined as the product of the rotational diffusion constant and the mixed-layer thickness, divided by the sedimentation rate. This parameter defines three regimes: (1) slow mixing, leading to DRM preservation and insignificant MRM acquisition, (2) fast mixing, with MRM acquisition and full randomization of the original DRM, and (3) intermediate mixing. Because the acquisition efficiency of a DRM is expectedly larger than that of an MRM, MRM is particularly sensitive to the mixing rate in intermediate regimes, and generates variable NRM acquisition efficiencies. Our model explains (1) lock-in delays that can be matched with empirical reconstructions from paleomagnetic records, (2) the existence of small lock-in depths leading to DRM preservation, (3) NRM acquisition efficiencies of magnetofossil-rich sediments, and (4) relative paleointensity artifacts reported in some recent studies.
Beller Lectureship Talk: Active response of biological cells to mechanical stress
NASA Astrophysics Data System (ADS)
Safran, Samuel
2009-03-01
Forces exerted by and on adherent cells are important for many physiological processes such as wound healing and tissue formation. In addition, recent experiments have shown that stem cell differentiation is controlled, at least in part, by the elasticity of the surrounding matrix. We present a simple and generic theoretical model for the active response of biological cells to mechanical stress. The theory includes cell activity and mechanical forces as well as random forces as factors that determine the polarizability that relates cell orientation to stress. This allows us to explain the puzzling observation of parallel (or sometimes random) alignment of cells for static and quasi-static stresses and of nearly perpendicular alignment for dynamically varying stresses. In addition, we predict the response of the cellular orientation to a sinusoidally varying applied stress as a function of frequency and compare the theory with recent experiments. The dependence of the cell orientation angle on the Poisson ratio of the surrounding material distinguishes cells whose activity is controlled by stress from those controlled by strain. We have extended the theory to generalize the treatment of elastic inclusions in solids to "living" inclusions (cells) whose active polarizability, analogous to the polarizability of non-living matter, results in the feedback of cellular forces that develop in response to matrix stresses. We use this to explain recent observations of the non-monotonic dependence of stress-fiber polarization in stem cells on matrix rigidity. These findings provide a mechanical correlate for the existence of an optimal substrate elasticity for cell differentiation and function. *In collaboration with R. De (Brown University), Y. Biton (Weizmann Institute), and A. Zemel (Hebrew University) and the experimental groups: Max Planck Institute, Stuttgart: S. Jungbauer, R. Kemkemer, J. Spatz; University of Pennsylvania: A. Brown, D. Discher, F. Rehfeldt.
NASA Astrophysics Data System (ADS)
Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.; Hardelauf, H.
2003-04-01
When the small-scale transport is modeled by a Wiener process and the large-scale heterogeneity by a random velocity field, the effective coefficients, Deff, can be decomposed as sums of the local coefficient, D, a contribution of the random advection, Dadv, and a contribution of the randomness of the trajectory of the plume center of mass, Dcm: Deff = D + Dadv - Dcm. The coefficient Dadv is similar to that introduced by Taylor in 1921, and more recent works associate it with thermodynamic equilibrium. The "ergodic hypothesis" says that over large time intervals Dcm vanishes and the effect of the heterogeneity is described by Dadv = Deff - D. In this work we investigate numerically the long-time behavior of the effective coefficients as well as the validity of the ergodic hypothesis. The transport in every realization of the velocity field is modeled with the Global Random Walk Algorithm, which is able to track as many particles as necessary to achieve a statistically reliable simulation of the process. Averages over realizations are further used to estimate mean coefficients and standard deviations. In order to remain within the frame of most theoretical approaches, the velocity field was generated in a linear approximation, and the logarithm of the hydraulic conductivity was taken to have an exponentially decaying correlation with variance equal to 0.1. Our results show that even in these idealized conditions, the effective coefficients tend to asymptotic constant values only when the plume travels thousands of correlation lengths (while the first-order theories usually predict Fickian behavior after tens of correlation lengths) and that the ergodicity conditions are still far from being met.
Recognition and processing of randomly fluctuating electric signals by Na,K-ATPase.
Xie, T. D.; Marszalek, P.; Chen, Y. D.; Tsong, T. Y.
1994-01-01
Previous work has shown that Na,K-ATPase of human erythrocytes can extract free energy from sinusoidal electric fields to pump cations up their respective concentration gradients. Because a regularly oscillating waveform is not a feature of the transmembrane electric potential of cells, questions have been raised about whether these observed effects are biologically relevant. Here we show that a random-telegraph fluctuating electric field (RTF), consisting of alternating square electric pulses with random lifetimes, can also stimulate the Rb(+)-pumping mode of the Na,K-ATPase. The net RTF-stimulated, ouabain-sensitive Rb+ pumping was monitored with 86Rb+. The tracer-measured Rb+ influx exhibited frequency and amplitude dependencies that peaked at a mean frequency of 1.0 kHz and an amplitude of 20 V/cm. At 4 degrees C, the maximal pumping activity under these optimal conditions was 28 Rb+/RBC-hr, which is approximately 50% higher than that obtained with the sinusoidal electric field. These findings indicate that Na,K-ATPase can recognize an electric signal, either regularly oscillating or randomly fluctuating, for energy coupling, with high fidelity. The use of an RTF for activation also allowed a quantitative theoretical analysis of the kinetics of a membrane transport model of any complexity according to the theory of electroconformational coupling (ECC) by the diagram methods. A four-state ECC model was shown to reproduce the amplitude and frequency windows of the Rb(+)-pumping if the free energy of interaction of the transporter with the membrane potential includes a nonlinear quadratic term. Kinetic constants for the ECC model have been derived. These results indicate that ECC is a plausible mechanism for the recognition and processing of electric signals by proteins of the cell membrane. PMID:7811939
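A sketch of the stimulus waveform as described: alternating square pulses of fixed amplitude with exponentially distributed random lifetimes. The rate convention relating the mean lifetime to the quoted mean frequency is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

def random_telegraph_field(mean_freq_hz, amplitude, duration_s, rng):
    """Alternating +/-amplitude square pulses with random exponential lifetimes."""
    # Assumed convention: transitions occur at rate 2 * mean_freq, so one full
    # +/- cycle has mean duration 1 / mean_freq.
    rate = 2.0 * mean_freq_hz
    times, levels, t, level = [0.0], [amplitude], 0.0, amplitude
    while t < duration_s:
        t += rng.exponential(1.0 / rate)   # random lifetime of the current level
        level = -level                      # switch polarity
        times.append(min(t, duration_s))
        levels.append(level)
    return np.array(times), np.array(levels)

rng = np.random.default_rng(7)
t, E = random_telegraph_field(mean_freq_hz=1.0e3, amplitude=20.0,
                              duration_s=0.01, rng=rng)
print(len(t), "switches in 10 ms; field levels:", set(E.tolist()))
```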
Complex behaviour and predictability of the European dry spell regimes
NASA Astrophysics Data System (ADS)
Lana, X.; Martínez, M. D.; Serra, C.; Burgueño, A.
2010-09-01
The complex spatial and temporal characteristics of European dry spell lengths (DSL; sequences of consecutive days with rainfall below a certain threshold), together with their randomness and predictive instability, are analysed from daily pluviometric series recorded at 267 rain gauges over the second half of the 20th century. DSLs are obtained by considering four thresholds, R0, of 0.1, 1.0, 5.0 and 10.0 mm/day. A proper quantification of the complexity, randomness and predictive instability of the different DSL regimes in Europe is achieved on the basis of fractal analyses and dynamic systems theory, including the reconstruction theorem. First, the concept of lacunarity is applied to the series of daily rainfall, and the lacunarity curves are well fitted by Cantor and random Cantor sets. Second, rescaled-range analysis reveals that randomness, persistence and anti-persistence are all present in the European DSL series. Third, the complexity of the physical process governing the DSL series is quantified by the minimum number of nonlinear equations, determined by the correlation dimension. And fourth, the loss of memory of the physical process, one of the reasons for the complex predictability, is characterized by the values of the Kolmogorov entropy, with predictive instability directly associated with positive Lyapunov exponents. In this way, new bases are established for better prediction of DSLs in Europe, which sometimes lead to drought episodes. Concretely, three predictive strategies are proposed in Sect. 5. It is worth mentioning that the spatial distribution of the fractal parameters does not depend solely on latitude and longitude but also reflects the effects of orography, continental climate and proximity to the Atlantic and Arctic Oceans and the Mediterranean Sea.
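Of the four analyses, the rescaled-range step is the simplest to sketch. The function below estimates a Hurst exponent by R/S analysis; the input is a synthetic surrogate, since the real input would be the DSL series of one gauge at a given threshold R0.

```python
import numpy as np

def rescaled_range_hurst(x, window_sizes):
    """Estimate the Hurst exponent of series x by rescaled-range (R/S) analysis."""
    rs, ns = [], []
    for n in window_sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = w.std()
            if s > 0:
                vals.append(r / s)
        if vals:
            rs.append(np.mean(vals)); ns.append(n)
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope   # H < 0.5 anti-persistent, ~0.5 random, > 0.5 persistent

# Illustrative surrogate for a dry-spell-length series.
rng = np.random.default_rng(8)
dsl = rng.geometric(0.2, size=4000).astype(float)
print(rescaled_range_hurst(dsl, [16, 32, 64, 128, 256, 512]))
```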
Changing climates of conflict: A social network experiment in 56 schools.
Paluck, Elizabeth Levy; Shepherd, Hana; Aronow, Peter M
2016-01-19
Theories of human behavior suggest that individuals attend to the behavior of certain people in their community to understand what is socially normative and adjust their own behavior in response. An experiment tested these theories by randomizing an anticonflict intervention across 56 schools with 24,191 students. After comprehensively measuring every school's social network, randomly selected seed groups of 20-32 students from randomly selected schools were assigned to an intervention that encouraged their public stance against conflict at school. Compared with control schools, disciplinary reports of student conflict at treatment schools were reduced by 30% over 1 year. The effect was stronger when the seed group contained more "social referent" students who, as network measures reveal, attract more student attention. Network analyses of peer-to-peer influence show that social referents spread perceptions of conflict as less socially normative.
Temporal evolution of financial-market correlations.
Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S
2011-08-01
We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
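A toy sketch of the time-evolving analysis: rolling-window correlation matrices of synthetic returns whose common factor strengthens halfway through, with the leading-eigenvalue share tracking the change, loosely analogous to the crisis-driven increase reported above. The data and window length are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
T, N = 2000, 30
# Synthetic returns with a common "market" factor whose strength doubles
# halfway through the sample.
market = rng.normal(size=T)
beta = np.where(np.arange(T) < T // 2, 0.3, 0.6)
r = beta[:, None] * market[:, None] + rng.normal(size=(T, N))

window = 250
for start in (0, T - window):
    C = np.corrcoef(r[start:start + window], rowvar=False)
    lam = np.linalg.eigvalsh(C)
    share = lam[-1] / N      # variance fraction carried by the first component
    print(f"window at {start}: leading-eigenvalue share = {share:.2f}")
```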
On the distribution of a product of N Gaussian random variables
NASA Astrophysics Data System (ADS)
Stojanac, Željka; Suess, Daniel; Kliesch, Martin
2017-08-01
The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
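The closed forms above involve Meijer G and Fox H functions; a Monte Carlo cross-check is easy to sketch for N = 2, where the density reduces to the classical K0 Bessel form.

```python
import numpy as np
from scipy import special

rng = np.random.default_rng(10)
n_samples = 10**6

# Product of N = 2 independent standard Gaussians; its density is the
# classical closed form K_0(|z|)/pi (a special case of the Meijer-G result).
z = rng.normal(size=n_samples) * rng.normal(size=n_samples)

for x in (0.5, 1.0, 2.0):
    mc_pdf = np.mean(np.abs(z - x) < 0.01) / 0.02   # histogram estimate at x
    exact = special.k0(abs(x)) / np.pi
    print(x, round(mc_pdf, 4), round(exact, 4))
```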
NASA Technical Reports Server (NTRS)
Gatlin, L. L.
1974-01-01
Concepts of information theory are applied to examine various proteins in terms of their redundancy in natural organisms such as animals and plants. The Monte Carlo method is used to derive information parameters for random protein sequences. Real protein sequence parameters are compared with the standard parameters of protein sequences of a given length. The tendency of a chain to contain some amino acids more frequently than others, and the tendency of a chain to contain certain amino acid pairs more frequently than other pairs, are used as randomness measures of individual protein sequences. Non-periodic proteins are generally found to have random Shannon redundancies except in cases of constraints due to short chain length and the genetic code. Redundant characteristics of highly periodic proteins are discussed. A degree-of-periodicity parameter is derived.
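A sketch of the two randomness measures named above, in the spirit of Gatlin's divergence parameters: D1, the divergence of single amino-acid usage from equiprobability, and D2, the divergence of pair usage from independence. The sequences are synthetic, and labeling these D1/D2 follows the standard formulation, which may differ in detail from the paper's.

```python
import math
import random
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino acids

def redundancy_parameters(seq):
    """D1: divergence of amino-acid usage from equiprobability (bits);
    D2: H1 minus the first-order conditional entropy (pair-usage bias).
    Marginals are taken over the full sequence, a small approximation."""
    n = len(seq)
    p = {a: c / n for a, c in Counter(seq).items()}
    h1 = -sum(v * math.log2(v) for v in p.values())
    pairs = Counter(zip(seq, seq[1:]))
    m = sum(pairs.values())
    hm = -sum((c / m) * math.log2((c / m) / p[a]) for (a, b), c in pairs.items())
    return math.log2(len(AA)) - h1, h1 - hm

random.seed(11)
rand_seq = "".join(random.choices(AA, k=5000))
biased_seq = "".join(random.choices(AA, weights=[5] * 5 + [1] * 15, k=5000))
print(redundancy_parameters(rand_seq))    # both close to zero
print(redundancy_parameters(biased_seq))  # D1 clearly positive
```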
A multiple scattering theory for EM wave propagation in a dense random medium
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.; Wong, K. W.
1985-01-01
For a dense medium of randomly distributed scatterers, an integral formulation for the total coherent field has been developed. This formulation accounts for the multiple scattering of electromagnetic waves, including both the two- and three-particle terms. It is shown that under the Markovian assumption the total coherent field and the effective field have the same effective wave number. As an illustration of this theory, the effective wave number and the extinction coefficient are derived in terms of the polarizability tensor and the pair distribution function for randomly distributed small spherical scatterers. It is found that the contribution of the three-particle term increases with the particle size, the volume fraction, the frequency and the permittivity of the particle. This increase is more significant with frequency and particle size than with the other parameters.
Parallel capillary-tube-based extension of thermoacoustic theory for random porous media.
Roh, Heui-Seol; Raspet, Richard; Bass, Henry E
2007-03-01
Thermoacoustic theory is extended to stacks made of random bulk media. Characteristics of the porous stack such as the tortuosity and dynamic shape factors are introduced into the thermoacoustic wave equation in the low reduced frequency approximation. Basic thermoacoustic equations for a bulk porous medium are formulated analogously to the equations for a single pore. Use of different dynamic shape factors for the viscous and thermal effects is adopted and scaling using the dynamic shape factors and tortuosity is demonstrated. Comparisons of the calculated and experimentally derived thermoacoustic properties of reticulated vitreous carbon and aluminum foam show good agreement. A consistent mathematical model of sound propagation in a random porous medium with an imposed temperature is developed. This treatment leads to an expression for the coefficient of the temperature gradient in terms of scaled cylindrical thermoviscous functions.
NASA Astrophysics Data System (ADS)
Lopes, Artur O.; Neumann, Adriana
2015-05-01
In the present paper, we consider a family of continuous time symmetric random walks indexed by , . For each, the matching random walk takes values in the finite set of states ; notice that is a subset of , where is the unitary circle. The infinitesimal generator of such a chain is denoted by . The stationary probability for such a process converges to the uniform distribution on the circle, when . Here we want to study other natural measures, obtained via a limit on , that are concentrated on some points of . We disturb this process by a potential and study, for each , the perturbed stationary measures of this new process when . We disturb the system considering a fixed potential and we denote by the restriction of to . Then, we define a non-stochastic semigroup generated by the matrix , where is the infinitesimal generator of . From the continuous time Perron's Theorem one can normalize such a semigroup, and then we get another stochastic semigroup which generates a continuous time Markov Chain taking values on . This new chain is called the continuous time Gibbs state associated with the potential , see (Lopes et al. in J Stat Phys 152:894-933, 2013). The stationary probability vector for such a Markov Chain is denoted by . We assume that the maximum of is attained at a unique point of , and from this it follows that . Thus, here, our main goal is to analyze the large deviation principle for the family , when . The deviation function , which is defined on , will be obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process . For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for such a family of Markov Chains, when . Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated with the Markov Chains we analyze.
NASA Technical Reports Server (NTRS)
Over, Thomas, M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
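A minimal sketch of a one-parameter discrete random cascade in one dimension (the paper's cascades are two-dimensional radar fields). The "beta model" generator used here, W = 1/beta with probability beta and 0 otherwise, is one simple single-parameter choice, not necessarily the paper's.

```python
import numpy as np

def random_cascade(levels, branching=2, beta=0.9, rng=None):
    """Discrete multiplicative cascade: at each level every cell splits into
    `branching` subcells whose mass is multiplied by an iid generator W with
    E[W] = 1, so mass is conserved on average."""
    rng = rng or np.random.default_rng()
    field = np.array([1.0])
    for _ in range(levels):
        field = np.repeat(field, branching)
        w = np.where(rng.random(field.size) < beta, 1.0 / beta, 0.0)
        field *= w
    return field

field = random_cascade(levels=10, beta=0.9, rng=np.random.default_rng(12))
# Scaling of marginal moments across aggregation scales, the diagnostic used
# above to compare data with cascade theory.
for agg in (1, 4, 16):
    coarse = field.reshape(-1, agg).mean(axis=1)
    print(agg, round((coarse ** 2).mean(), 3))
```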
NASA Technical Reports Server (NTRS)
Dlugach, Janna M.; Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.
2011-01-01
Direct computer simulations of electromagnetic scattering by discrete random media have become an active area of research. In this progress review, we summarize and analyze our main results obtained by means of numerically exact computer solutions of the macroscopic Maxwell equations. We consider finite scattering volumes with size parameters in the range, composed of varying numbers of randomly distributed particles with different refractive indices. The main objective of our analysis is to examine whether all backscattering effects predicted by the low-density theory of coherent backscattering (CB) also take place in the case of densely packed media. Based on our extensive numerical data we arrive at the following conclusions: (i) all backscattering effects predicted by the asymptotic theory of CB can also take place in the case of densely packed media; (ii) in the case of very large particle packing density, scattering characteristics of discrete random media can exhibit behavior not predicted by the low-density theories of CB and radiative transfer; (iii) increasing the absorptivity of the constituent particles can either enhance or suppress typical manifestations of CB depending on the particle packing density and the real part of the refractive index. Our numerical data strongly suggest that spectacular backscattering effects identified in laboratory experiments and observed for a class of high-albedo Solar System objects are caused by CB.
Salimzadeh, Hamideh; Eftekhar, Hassan; Majdzadeh, Reza; Montazeri, Ali; Delavari, Alireza
2014-10-01
Colorectal cancer is the third most commonly diagnosed cancer and the fourth leading cause of death in the world. There are few published studies that have used theory-based interventions designed to increase colorectal cancer screening in community lay health organizations. The present study was guided by the theoretical concepts of the preventive health model. Twelve health clubs of a municipal district in Tehran were randomized to two study groups with equal ratio. The control group received usual services throughout the study while the intervention group also received a theory-based educational program on colorectal cancer screening plus a reminder call. Screening behavior, the main outcome, was assessed 4 months after randomization. A total of 360 members aged 50 and older from 12 health clubs completed a baseline survey. Participants in the intervention group reported increased knowledge of colorectal cancer and screening tests at 4 months follow-up (p's < .001). Moreover, exposure to the theory-based intervention significantly improved self-efficacy, perceived susceptibility, efficacy of screening, social support, and intention to be screened for colorectal cancer, from baseline to 4 months follow-up (p's < .001). The screening rate for colorectal cancer was significantly higher in the intervention group compared to the control group (odds ratio = 15.93, 95% CI = 5.57, 45.53). Our theory-based intervention was found to have a significant effect on colorectal cancer screening use as measured by self-report. The findings could have implications for colorectal cancer screening program development and implementation in primary health care settings and through other community organizations.
Geometric and integrable aspects of random matrix models
NASA Astrophysics Data System (ADS)
Marchal, Olivier
2010-12-01
This thesis deals with the geometric and integrable aspects associated with random matrix models. Its purpose is to provide various applications of random matrix theory, from algebraic geometry to partial differential equations of integrable systems. The variety of these applications shows why matrix models are important from a mathematical point of view. First, the thesis will focus on the study of the merging of two intervals of the eigenvalues density near a singular point. Specifically, we will show why this special limit gives universal equations from the Painlevé II hierarchy of integrable systems theory. Then, following the approach of (bi) orthogonal polynomials introduced by Mehta to compute partition functions, we will find Riemann-Hilbert and isomonodromic problems connected to matrix models, making the link with the theory of Jimbo, Miwa and Ueno. In particular, we will describe how the hermitian two-matrix models provide a degenerate case of Jimbo-Miwa-Ueno's theory that we will generalize in this context. Furthermore, the loop equations method, with its central notions of spectral curve and topological expansion, will lead to the symplectic invariants of algebraic geometry recently proposed by Eynard and Orantin. This last point will be generalized to the case of non-hermitian matrix models (arbitrary beta) paving the way to "quantum algebraic geometry" and to the generalization of symplectic invariants to "quantum curves". Finally, this set up will be applied to combinatorics in the context of topological string theory, with the explicit computation of an hermitian random matrix model enumerating the Gromov-Witten invariants of a toric Calabi-Yau threefold.
Stochastic dynamics of time correlation in complex systems with discrete time
NASA Astrophysics Data System (ADS)
Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail
2000-11-01
In this paper we present a framework for describing random processes in complex systems with discrete time. The kinetics of discrete processes are described by a chain of finite-difference non-Markov equations for time correlation functions (TCFs). We introduce the dynamic (time-dependent) information Shannon entropy Si(t), where i=0,1,2,3,..., as an information measure of the stochastic dynamics of time correlation (i=0) and time memory (i=1,2,3,...). The set of functions Si(t) constitutes a quantitative measure of time correlation disorder (i=0) and time memory disorder (i=1,2,3,...) in a complex system. The theory starts from a careful analysis of time correlation involving the dynamics of sets of vectors of various chaotic states. We examine in detail two stochastic processes involving the creation and annihilation of time correlation (or time memory). We analyze the dynamics of the vectors using finite-difference equations for random variables and the evolution operator describing their natural motion. The existence of a TCF leads to the construction of a set of projection operators via the scalar product operation. Harnessing the infinite set of orthogonal dynamic random variables, obtained through a Gram-Schmidt orthogonalization procedure, leads to an infinite chain of finite-difference non-Markov kinetic equations for discrete TCFs and memory functions (MFs). Solving these equations yields recurrence relations between the TCFs and MFs of senior and junior orders. This offers new opportunities for detecting the frequency power spectra of the entropy function Si(t) for time correlation (i=0) and time memory (i=1,2,3,...). The results obtained offer considerable scope for the analysis of the stochastic dynamics of discrete random processes in complex systems. Application of this technique to the analysis of the stochastic dynamics of RR intervals from human ECGs shows convincing evidence of non-Markovian phenomena associated with peculiarities in short- and long-range scaling. This method may be of use in distinguishing healthy from pathologic data sets based on differences in these non-Markovian properties.
Quantum theory of the electronic and optical properties of low-dimensional semiconductor systems
NASA Astrophysics Data System (ADS)
Lau, Wayne Heung
This thesis examines the electronic and optical properties of low-dimensional semiconductor systems. A theory is developed to study the electron-hole generation-recombination process of type-II semimetallic semiconductor heterojunctions based on a 3 x 3 k·p matrix Hamiltonian (three-band model) and an 8 x 8 k·p matrix Hamiltonian (eight-band model). A novel electron-hole generation and recombination process, called the activationless generation-recombination process, is predicted. It is demonstrated that the current through type-II semimetallic semiconductor heterojunctions is governed by the activationless electron-hole generation-recombination process at the heterointerfaces, and that the current-voltage characteristics are essentially linear. Qualitative agreement between theory and experiments is observed. The numerical results of the eight-band model are compared with those of the three-band model. Based on a lattice gas model, a theory is developed to study the influence of a random potential on the ionization equilibrium conditions for bound electron-hole pairs (excitons) in III-V semiconductor heterostructures. It is demonstrated that ionization equilibrium conditions for bound electron-hole pairs change drastically in the presence of strong disorder. It is predicted that strong disorder promotes dissociation of excitons in III-V semiconductor heterostructures. A theory of polariton (photon dressed by phonon) spontaneous emission in a III-V semiconductor doped with semiconductor quantum dots (QDs) or quantum wells (QWs) is developed. For the first time, superradiant and subradiant polariton spontaneous emission phenomena in a polariton-QD (QW) coupled system are predicted when the resonance energies of the two identical QDs (QWs) lie outside the polaritonic energy gap. It is also predicted that when the resonance energies of the two identical QDs (QWs) lie inside the polaritonic energy gap, spontaneous emission of polaritons in the polariton-QD (QW) coupled system is inhibited and polariton bound states are formed within the polaritonic energy gap. A theory is also developed to study the polariton eigenenergy spectrum, polariton effective mass, and polariton spectral density of N identical semiconductor QDs (QWs) or a superlattice (SL) placed inside a III-V semiconductor. A polariton-impurity band lying within the polaritonic energy gap of the III-V semiconductor is predicted when the resonance energies of the QDs (QWs) lie inside the polaritonic energy gap. A hole-like polariton effective mass of the polariton-impurity band is predicted. It is also predicted that the spectral density of the polariton has a Lorentzian shape if the resonance energies of the QDs (QWs) lie outside the polaritonic gap.
de Jorge, Mercedes; Parra, Sonia; de la Torre-Aboki, Jenny; Herrero-Beaumont, Gabriel
2015-08-01
Patients in randomized clinical trials have to adapt themselves to a restricted language to capture the necessary information to determine the safety and efficacy of a new treatment. The aim of this study was to explore the experience of patients with rheumatoid arthritis after completing their participation in a biologic-therapy randomized clinical trial lasting 3 years. A qualitative approach was used. The information was collected through 15 semi-structured interviews of patients with rheumatoid arthritis. Data collection was guided by the emergent analysis until no more relevant variations in the categories were found. The data were analysed using the grounded theory method. The objective of the patients when entering the study was to improve their quality of life by initiating the treatment. However, the experience changed the significance of the illness as they acquired skills and practical knowledge related to the management of their disease. The category "Interactional Empowerment" emerged as the core category, as it represented the participative experience in a clinical trial. The process integrates the following categories: "weight of systematisation", "working together", and the significance of the experience: "the duties". These categories evolved simultaneously. The clinical-trial monitoring activities enabled patients to engage in a reflexive-interpretative mechanism that transformed the emotional and symbolic significance of their disease and improved the empowerment of the patient. A better communicative strategy with health professionals, the relatives of the patients, and the community was also achieved.
Theories underlying health promotion interventions among cancer survivors.
Pinto, Bernardine M; Floyd, Andrea
2008-08-01
To review the theories that have been the basis for randomized controlled trials (RCTs) promoting health behavior change among adults diagnosed and treated for cancer. Electronic databases and recent review papers. Several theories have been used in intervention development: Transtheoretical Model, Motivational Interviewing, Social Learning and Social Cognitive Theory, Theory of Planned Behavior, and Cognitive Behavioral Theory. There is support for the efficacy of some of these interventions. However, there has been limited assessment of theory-based constructs and examination of the mediational role of theoretical constructs in intervention efficacy. There is a need to apply theory in the development of interventions to assess the effects of the intervention on the constructs and to conduct mediational tests of these constructs.
ERIC Educational Resources Information Center
Aycinena, Ana Corina; Jennings, Kerri-Ann; Gaffney, Ann Ogden; Koch, Pamela A.; Contento, Isobel R.; Gonzalez, Monica; Guidon, Ela; Karmally, Wahida; Hershman, Dawn; Greenlee, Heather
2017-01-01
We developed a theory-based dietary change curriculum for Hispanic breast cancer survivors with the goal of testing the effects of the intervention on change in dietary intake of fruits/vegetables and fat in a randomized, clinical trial. Social cognitive theory and the transtheoretical model were used as theoretical frameworks to structure…
Random close packing of polydisperse jammed emulsions
NASA Astrophysics Data System (ADS)
Brujic, Jasna
2010-03-01
Packing problems are everywhere, ranging from oil extraction through porous rocks to grain storage in silos and the compaction of pharmaceutical powders into tablets. At a given density, particulate systems pack into a mechanically stable and amorphous jammed state. Theoretical frameworks have proposed a connection between this jammed state and the glass transition, a thermodynamics of jamming, as well as geometric modeling of random packings. Nevertheless, a simple underlying mechanism for the random assembly of athermal particles, analogous to crystalline ordering, remains unknown. Here we use 3D measurements of polydisperse packings of emulsion droplets to build a simple statistical model in which the complexity of the global packing is distilled into a local stochastic process. From the perspective of a single particle the packing problem is reduced to the random formation of nearest neighbors, followed by a choice of contacts among them. The two key parameters in the model, the available space around a particle and the ratio of contacts to neighbors, are directly obtained from experiments. Remarkably, we demonstrate that this "granocentric" view captures the properties of the polydisperse emulsion packing, ranging from the microscopic distributions of nearest neighbors and contacts to local density fluctuations and all the way to the global packing density. Further applications to monodisperse and bidisperse systems quantitatively agree with previously measured trends in global density. This model therefore reveals a general principle of organization for random packing and lays the foundations for a theory of jammed matter.
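A hedged sketch of the granocentric two-step process: neighbors consume random solid angle around a central droplet until the budget runs out, then each neighbor independently becomes a contact. The angle budget, per-neighbor angle distribution, and contact probability are illustrative stand-ins for the two parameters measured from the emulsion data.

```python
import numpy as np

def granocentric_sample(total_angle=3.0 * np.pi, p_contact=0.4, rng=None):
    """One granocentric trial: neighbors claim random solid angle around a
    central droplet until the available budget is exhausted; each neighbor is
    then independently a true contact with probability p_contact."""
    rng = rng or np.random.default_rng()
    used, neighbors = 0.0, 0
    while True:
        omega = rng.uniform(0.4, 1.2)       # solid angle claimed by next neighbor
        if used + omega > total_angle:
            break                           # no room for another neighbor
        used += omega
        neighbors += 1
    contacts = rng.binomial(neighbors, p_contact)
    return neighbors, contacts

rng = np.random.default_rng(13)
samples = np.array([granocentric_sample(rng=rng) for _ in range(10000)])
print("mean neighbors:", samples[:, 0].mean(), " mean contacts:", samples[:, 1].mean())
```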
Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharroubi, Idris, E-mail: kharroubi@ceremade.dauphine.fr; Lim, Thomas, E-mail: lim@ensiie.fr; Ngoupeyou, Armand, E-mail: armand.ngoupeyou@univ-paris-diderot.fr
2013-12-15
In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.
The timescales of global surface-ocean connectivity.
Jönsson, Bror F; Watson, James R
2016-04-19
Planktonic communities are shaped through a balance of local evolutionary adaptation and ecological succession driven in large part by migration. The timescales over which these processes operate are still largely unresolved. Here we use Lagrangian particle tracking and network theory to quantify the timescale over which surface currents connect different regions of the global ocean. We find that the fastest path between two patches--each randomly located anywhere in the surface ocean--is, on average, less than a decade. These results suggest that marine planktonic communities may keep pace with climate change--increasing temperatures, ocean acidification and changes in stratification over decadal timescales--through the advection of resilient types.
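The network step can be sketched with a toy directed graph whose edge weights are advective transit times (all values invented); the "fastest path" is then an ordinary shortest-path query.

```python
import networkx as nx

# Toy transit-time network: nodes are surface-ocean patches, directed edges
# carry the advection time (in years) estimated from Lagrangian particles.
G = nx.DiGraph()
edges = [("subtrop_NA", "subpolar_NA", 4.0), ("subpolar_NA", "arctic", 3.0),
         ("subtrop_NA", "equator_A", 2.5), ("equator_A", "subtrop_SA", 2.0),
         ("subtrop_SA", "s_ocean", 5.0), ("equator_A", "arctic", 9.0)]
G.add_weighted_edges_from(edges, weight="years")

# Fastest connection = shortest path in transit time (Dijkstra).
t = nx.shortest_path_length(G, "subtrop_NA", "s_ocean", weight="years")
path = nx.shortest_path(G, "subtrop_NA", "s_ocean", weight="years")
print(path, f"{t} years")   # the paper's finding: such minima average < 10 years
```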
Generating Correlated Gamma Sequences for Sea-Clutter Simulation
2012-03-01
generation of correlated Gamma random fields via SIRP theory is examined in [Conte et al. 1991, Armstrong & Griffiths 1991]. In these papers, the Gamma ... ⟨z[n]z[n+k]⟩ = ⟨|x|^2⟩^2 + |⟨x[n]x*[n+k]⟩|^2. (4) Because ⟨|x|^2⟩^2 = z̄^2 and |⟨x[n]x*[n+k]⟩|^2 ≥ 0, this results in ⟨z[n]z[n+k]⟩ ≥ z̄^2 if the realisation of z[n] is ... linear mapping. In a practical situation, a process with a given auto-covariance function would be specified. It is shown that by using an
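Consistent with Eq. (4), correlated Gamma sequences can be sketched by passing white complex Gaussian noise through a one-pole filter and summing squared magnitudes; this is a generic memoryless-nonlinearity construction, not necessarily the report's method.

```python
import numpy as np

rng = np.random.default_rng(14)
n, shape_k, rho = 50_000, 2, 0.9

# Correlated unit-variance complex Gaussian x[n] via a one-pole (AR(1)) filter.
w = (rng.normal(size=(shape_k, n)) + 1j * rng.normal(size=(shape_k, n))) / np.sqrt(2)
x = np.empty_like(w)
x[:, 0] = w[:, 0]
for i in range(1, n):
    x[:, i] = rho * x[:, i - 1] + np.sqrt(1 - rho**2) * w[:, i]

# Memoryless nonlinearity: summing |x|^2 over shape_k independent Gaussian
# processes gives a Gamma(shape_k) sequence. Per Eq. (4), each component has
# <z[n]z[n+k]> = <|x|^2>^2 + |<x[n]x*[n+k]>|^2, hence covariance >= zbar^2.
z = (np.abs(x) ** 2).sum(axis=0)
lag = 10
cov = np.mean(z[:-lag] * z[lag:])
print(z.mean(), z.var(), cov, z.mean() ** 2)   # mean ~ k, var ~ k, cov >= mean^2
```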
Stochastic optimal control of non-stationary response of a single-degree-of-freedom vehicle model
NASA Astrophysics Data System (ADS)
Narayanan, S.; Raju, G. V.
1990-09-01
An active suspension system to control the non-stationary response of a single-degree-of-freedom (sdf) vehicle model with variable velocity traverse over a rough road is investigated. The suspension is optimized with respect to ride comfort and road holding, using stochastic optimal control theory. The ground excitation is modelled as a spatial homogeneous random process, being the output of a linear shaping filter to white noise. The effect of the rolling contact of the tyre is considered by an additional filter in cascade. The non-stationary response with active suspension is compared with that of a passive system.
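A minimal sketch of the design procedure described above: the road is modelled as a first-order shaping filter driven by white noise, appended to the vehicle states, and the active suspension force follows from stochastic optimal (LQR) control. Masses, stiffnesses and weights are illustrative rather than the paper's values, and the tyre rolling-contact filter is omitted for brevity.

import numpy as np
from scipy.linalg import solve_continuous_are

m, ks, cs = 250.0, 16_000.0, 1_000.0  # sprung mass, spring, damper (illustrative)
v, alpha = 20.0, 0.1                  # vehicle speed, road roughness decay

# State: [body displacement, body velocity, road height]; the road
# height is the output of a first-order shaping filter driven by white
# noise, so its dynamics appear as the third state.
A = np.array([[0.0, 1.0, 0.0],
              [-ks / m, -cs / m, ks / m],
              [0.0, 0.0, -alpha * v]])
B = np.array([[0.0], [1.0 / m], [0.0]])

# Weights trading ride comfort against suspension travel and effort.
Q = np.diag([1e4, 1e2, 0.0])
R = np.array([[1e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # optimal state-feedback gain, u = -K x
print("LQR gain:", K.round(2))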
Mak, Chi H; Pham, Phuong; Afif, Samir A; Goodman, Myron F
2015-09-01
Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that would be absent if the substrate DNA were homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
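The passive limit of the two coupled processes is straightforward to simulate. The Python sketch below performs an unbiased random walk on a circular 1D lattice and converts target sites with a small catalytic probability per visit; lattice size, target placement and rates are illustrative, and the intrusive scanning-catalysis coupling analysed in the paper is not modelled.

import numpy as np

rng = np.random.default_rng(4)

L, p_cat, n_steps, n_mol = 200, 0.02, 2000, 500   # illustrative values
targets = set(int(t) for t in rng.choice(L, size=20, replace=False))

def one_molecule():
    pos, converted = int(rng.integers(L)), set()
    for _ in range(n_steps):
        pos = (pos + (1 if rng.random() < 0.5 else -1)) % L   # scanning
        if pos in targets and pos not in converted and rng.random() < p_cat:
            converted.add(pos)                                # C -> U catalysis
    return converted

profile = np.zeros(L)
for _ in range(n_mol):
    for site in one_molecule():
        profile[site] += 1.0
print("conversions at first five targets:", profile[sorted(targets)[:5]])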
Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.
2015-01-01
Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that would be absent if the substrate DNA were homogeneous. The C → U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics. PMID:26465508
Intrinsic random functions for mitigation of atmospheric effects in terrestrial radar interferometry
NASA Astrophysics Data System (ADS)
Butt, Jemil; Wieser, Andreas; Conzett, Stefan
2017-06-01
The benefits of terrestrial radar interferometry (TRI) for deformation monitoring are restricted by the influence of changing meteorological conditions contaminating the potentially highly precise measurements with spurious deformations. This is especially the case when the measurement setup includes long distances between instrument and objects of interest and the topography affecting atmospheric refraction is complex. These situations are typically encountered in geo-monitoring in mountainous regions, e.g. of glaciers, landslides or volcanoes. We propose and explain an approach for the mitigation of atmospheric influences based on the theory of intrinsic random functions of order k (IRF-k), generalizing existing approaches based on ordinary least squares estimation of trend functions. This class of random functions retains convenient computational properties allowing for rigorous statistical inference while still permitting the modeling of stochastic spatial phenomena which are non-stationary in mean and variance. We explore the correspondence between the properties of the IRF-k and the properties of the measurement process. In an exemplary case study, we find that our method reduces the time needed to obtain reliable estimates of glacial movements from 12 h down to 0.5 h compared to simple temporal averaging procedures.
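A toy illustration of why the IRF-k machinery helps, on synthetic data: generalized increments of order k annihilate polynomial trends of degree k, so a non-stationary atmospheric phase profile becomes amenable to stationary-style variance estimation. Here k = 1 and the increment is a second difference; all numbers are invented.

import numpy as np

rng = np.random.default_rng(5)

x = np.linspace(0.0, 5000.0, 501)                # range bins [m]
trend = 2e-3 * x + 1e-7 * x**2                   # smooth atmospheric delay
phase = trend + 0.05 * np.cumsum(rng.standard_normal(x.size))

# Generalized increment of order k = 1: the second difference, which
# annihilates any linear trend and leaves a near-stationary residual.
d2 = phase[2:] - 2.0 * phase[1:-1] + phase[:-2]
print(f"raw variance {phase.var():.3f}, increment variance {d2.var():.5f}")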
NASA Astrophysics Data System (ADS)
Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.
2015-09-01
Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that would be absent if the substrate DNA were homogeneous. The C → U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
Divergence instability of pipes conveying fluid with uncertain flow velocity
NASA Astrophysics Data System (ADS)
Rahmati, Mehdi; Mirdamadi, Hamid Reza; Goli, Sareh
2018-02-01
This article investigates the probabilistic stability, in the time domain, of pipes conveying fluid with stochastic flow velocity. The study focuses on the effects of randomness in the flow velocity on the stability of pipes conveying fluid, whereas most previous research has considered only the influence of deterministic parameters on system stability. Euler-Bernoulli beam theory and plug-flow theory are employed to model the pipe structure and the internal flow, respectively. In addition, the flow velocity is considered as a stationary random process with Gaussian distribution. The stochastic averaging method and Routh's stability criterion are then used to investigate the stability conditions of the system. On this basis, the effects of boundary conditions, viscoelastic damping, mass ratio, and elastic foundation on the stability regions are discussed. Results show that the critical mean flow velocity decreases with increasing power spectral density (PSD) of the random velocity. Moreover, as the PSD increases from zero, the effects of boundary-condition type and of the presence of an elastic foundation diminish, while the influences of viscoelastic damping and mass ratio grow. Finally, to make the study more applicable, regression analysis is used to develop design equations and facilitate further analyses for design purposes.
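Routh's criterion, used above to delimit the stability regions, reduces to a sign test on the first column of the Routh array. A compact Python sketch (with no special handling of zero pivots) for an illustrative characteristic polynomial, not the pipe's actual characteristic equation:

def routh_is_stable(coeffs):
    # Routh array for a polynomial with coefficients in descending powers.
    # A zero first-column entry would need the usual epsilon substitution,
    # which this sketch omits.
    n = len(coeffs)
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    rows[1] += [0.0] * (width - len(rows[1]))
    for _ in range(n - 2):
        a, b = rows[-2], rows[-1]
        rows.append([(b[0] * a[j + 1] - a[0] * b[j + 1]) / b[0]
                     for j in range(width - 1)] + [0.0])
    first_col = [r[0] for r in rows]
    # Stable iff there is no sign change in the first column.
    return all(c > 0 for c in first_col) or all(c < 0 for c in first_col)

# s^3 + 4s^2 + 5s + 2 = (s + 1)^2 (s + 2): all roots in the left half-plane.
print(routh_is_stable([1.0, 4.0, 5.0, 2.0]))   # True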
A multi-assets artificial stock market with zero-intelligence traders
NASA Astrophysics Data System (ADS)
Ponta, L.; Raberto, M.; Cincotti, S.
2011-01-01
In this paper, a multi-assets artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and the allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distributions of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed in real data for business sectors. It is worth noting that the business sectors are recovered in our framework without dividends, solely as a consequence of the random restrictions on the allocation universe of the zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results as those obtained in the simulation without dividends. These results suggest a significant structural influence on the statistical properties of multi-asset stock markets.
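A sketch of the random-matrix diagnostic used above: eigenvalues of the empirical cross-correlation matrix of returns are compared against the Marchenko-Pastur edge expected for purely random returns, with two planted common factors standing in for the sector structure that emerges in the artificial market. Sizes and factor strength are illustrative.

import numpy as np

rng = np.random.default_rng(6)

n_assets, n_obs = 50, 1000                        # illustrative sizes
returns = rng.standard_normal((n_obs, n_assets))
for sector in (slice(0, 25), slice(25, 50)):      # two planted sectors
    returns[:, sector] += 0.3 * rng.standard_normal((n_obs, 1))

eig = np.linalg.eigvalsh(np.corrcoef(returns, rowvar=False))
lam_max = (1 + np.sqrt(n_assets / n_obs)) ** 2    # Marchenko-Pastur edge
print(f"MP edge {lam_max:.2f}; outliers: {np.round(eig[eig > lam_max], 2)}")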
Lagrangian particles with mixing. I. Simulating scalar transport
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.
2009-06-01
The physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. The randomly walking particles do not need to possess any properties other than location in physical space. However, particles used in many models dealing with simulating turbulent transport and turbulent combustion do possess a set of scalar properties, and mixing between particle properties is performed to reflect the dissipative nature of the diffusion processes. We show that continuous scalar transport and diffusion can be accurately specified by means of localized mixing between randomly walking Lagrangian particles with scalar properties, and we assess the errors associated with this scheme. Particles with scalar properties and localized mixing represent an alternative formulation for the process selected to represent the continuous diffusion. Simulating diffusion by Lagrangian particles with mixing involves three main competing requirements: minimizing stochastic uncertainty, minimizing bias introduced by numerical diffusion, and preserving the independence of particles. These requirements are analyzed for two limiting cases: mixing between two particles and mixing between a large number of particles. The problem of possible dependences between particles is the most complicated. This problem is analyzed using a coupled chain of equations that has similarities with the Bogoliubov-Born-Green-Kirkwood-Yvon chain in statistical physics. Dependences between particles can be significant in close proximity of the particles, resulting in a reduced rate of mixing. This work further develops ideas introduced in a previously published letter [Phys. Fluids 19, 031702 (2007)]. Paper I of this work is followed by Paper II [Phys. Fluids 19, 065102 (2009)], where modeling of turbulent reacting flows by Lagrangian particles with localized mixing is specifically considered.
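The scheme described above is compact enough to sketch directly in Python: particles random-walk in a periodic box while a scalar carried by each particle relaxes towards the mean of its position-adjacent pair, emulating diffusive dissipation. Particle number, step sizes and the mixing extent are illustrative choices, not the paper's analysis.

import numpy as np

rng = np.random.default_rng(7)

n_p, n_steps, dt, D, mix = 5000, 200, 1e-3, 0.5, 0.5   # illustrative
x = rng.uniform(0.0, 1.0, n_p)                   # particle positions
phi = np.where(x < 0.5, 1.0, 0.0)                # scalar step profile

for _ in range(n_steps):
    x = (x + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_p)) % 1.0
    order = np.argsort(x)                        # localized: sort by position
    pairs = order.reshape(-1, 2)                 # mix adjacent pairs
    mean = phi[pairs].mean(axis=1)
    phi[pairs[:, 0]] += mix * (mean - phi[pairs[:, 0]])
    phi[pairs[:, 1]] += mix * (mean - phi[pairs[:, 1]])

print(f"scalar variance after walk-and-mix: {phi.var():.4f}")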
Framework based on communicability and flow to analyze complex network dynamics
NASA Astrophysics Data System (ADS)
Gilson, M.; Kouvaris, N. E.; Deco, G.; Zamora-López, G.
2018-05-01
Graph theory constitutes a widely used and established field providing powerful tools for the characterization of complex networks. The intricate topology of networks can also be investigated by means of the collective dynamics observed in the interactions of self-sustained oscillations (synchronization patterns) or propagation-like processes such as random walks. However, networks are often inferred from real data generated by dynamic systems, which are different from those employed to reveal their topological characteristics. This stresses the necessity for a theoretical framework dedicated to the mutual relationship between structure and dynamics in complex networks, as two sides of the same coin. Here we propose a rigorous framework based on the network response over time (i.e., the Green function) to study interactions between nodes across time. For this purpose we define the flow that describes the interplay between the network connectivity and external inputs. This multivariate measure relates to the concepts of graph communicability and the map equation. We illustrate our theory using the multivariate Ornstein-Uhlenbeck process, which describes stable and non-conservative dynamics, but the formalism can be adapted to other local dynamics for which the Green function is known. We provide applications to classical network examples, such as small-world ring and hierarchical networks. Our theory defines a comprehensive framework that is canonically related to directed and weighted networks, thus paving the way to revising the standards for network analysis, from the pairwise interactions between nodes to the global properties of networks, including community detection.
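A minimal sketch of the framework's central object, assuming multivariate Ornstein-Uhlenbeck dynamics dx = Jx dt + noise with a stable Jacobian J built from an illustrative ring connectivity: the Green function G(t) = exp(Jt), whose entries measure the response of node i at time t to an input at node j.

import numpy as np
from scipy.linalg import expm

n = 10
C = np.zeros((n, n))                 # illustrative directed ring + shortcut
for i in range(n):
    C[(i + 1) % n, i] = 0.8
C[5, 0] = 0.5
J = C - np.eye(n)                    # stable Jacobian of the OU process

for t in (0.5, 2.0, 5.0):
    G = expm(J * t)                  # network response (Green function)
    print(f"t={t}: response of node 3 to an input at node 0 = {G[3, 0]:.4f}")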
Presseau, Justin; Nicholas Angl, Emily; Jokhio, Iffat; Schwalm, JD; Grimshaw, Jeremy M; Bosiak, Beth; Natarajan, Madhu K; Ivers, Noah M
2017-01-01
Background Taking all recommended secondary prevention cardiac medications and fully participating in a formal cardiac rehabilitation program significantly reduces mortality and morbidity in the year following a heart attack. However, many people who have had a heart attack stop taking some or all of their recommended medications prematurely and many do not complete a formal cardiac rehabilitation program. Objective The objective of our study was to develop a user-centered, theory-based, scalable intervention of printed educational materials to encourage and support people who have had a heart attack to use recommended secondary prevention cardiac treatments. Methods Prior to the design process, we conducted theory-based interviews and surveys with patients who had had a heart attack to identify key determinants of secondary prevention behaviors. Our interdisciplinary research team then partnered with a patient advisor and design firm to undertake an iterative, theory-informed, user-centered design process to operationalize techniques to address these determinants. User-centered design requires considering users’ needs, goals, strengths, limitations, context, and intuitive processes; designing prototypes adapted to users accordingly; observing how potential users respond to the prototype; and using those data to refine the design. To accomplish these tasks, we conducted user research to develop personas (archetypes of potential users), developed a preliminary prototype using behavior change theory to map behavior change techniques to identified determinants of medication adherence, and conducted 2 design cycles, testing materials via think-aloud and semistructured interviews with a total of 11 users (10 patients who had experienced a heart attack and 1 caregiver). We recruited participants at a single cardiac clinic using purposive sampling informed by our personas. We recorded sessions with users and extracted key themes from transcripts. We held interdisciplinary team discussions to interpret findings in the context of relevant theory-based evidence and iteratively adapted the intervention accordingly. Results Through our iterative development and testing, we identified 3 key tensions: (1) evidence from theory-based studies versus users’ feelings, (2) informative versus persuasive communication, and (3) logistical constraints for the intervention versus users’ desires or preferences. We addressed these by (1) identifying root causes for users’ feelings and addressing those to better incorporate theory- and evidence-based features, (2) accepting that our intervention was ethically justified in being persuasive, and (3) making changes to the intervention where possible, such as attempting to match imagery in the materials to patients’ self-images. Conclusions Theory-informed interventions must be operationalized in ways that fit with user needs. Tensions between users’ desires or preferences and health care system goals and constraints must be identified and addressed to the greatest extent possible. A cluster randomized controlled trial of the final intervention is currently underway. PMID:28249831
The Study of Health Coaching: The Ithaca Coaching Project, Research Design, and Future Directions
2013-01-01
Health coaching (HC) is a process holding tremendous potential as a complementary medical intervention to shape healthy behavior change and affect rates of chronic lifestyle diseases. Empirical knowledge of the effectiveness of the HC process, however, is lacking. The purposes of this paper are to present the study protocol for the Ithaca Coaching Project while also addressing research design, methodological issues, and directions for HC research. This is one of the first large-scale, randomized controlled trials of HC for primary prevention examining its impact on physical and emotional health status in an employee population. An additional intent of the project is to investigate self-determination theory as a theoretical framework for the coaching process. Participants (n=300) are recruited as part of a campus-wide wellness initiative and randomly assigned to one of three levels of client-centered HC or to a control with standard wellness program care. Repeated-measures analyses of covariance will be used to examine coaching effectiveness, while path analyses will be used to examine relationships between coaching processes, self-determination variables, and health outcomes. There is a great need for well-designed HC studies that define coaching best practices, examine intervention effectiveness, provide cost-benefit analysis, and address scope of practice. This information will allow a clearer definition of HC to emerge and a determination of whether, and how, HC fits in modern-day healthcare. This is an exciting but critical time for HC research and for the practice of HC. PMID:24416673