Science.gov

Sample records for randomness stochastic algorithms

  1. Convergence rates of finite difference stochastic approximation algorithms part II: implementation via common random numbers

    NASA Astrophysics Data System (ADS)

    Dai, Liyi

    2016-05-01

    Stochastic optimization is a fundamental problem that finds applications in many areas including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm when the gradient is approximated by finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, the rate can be increased to n^(-2/5) in general, and to n^(-1/2), the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems, where n is the iteration number.
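
    To make the common-random-numbers idea concrete, here is a minimal Python sketch of a Kiefer-Wolfowitz iteration on a toy quadratic objective; the objective, gain sequences, and noise model are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_objective(x, xi):
    # Illustrative noisy objective: F(x, xi) = (x - 1)^2 + noise driven by xi.
    return (x - 1.0) ** 2 + 0.5 * xi

def kiefer_wolfowitz(n_iter=10000, common_random_numbers=True):
    x = 0.0
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n            # gain sequence
        c_n = 1.0 / n ** 0.25    # finite-difference width
        xi_plus = rng.standard_normal()
        # CRN: reuse the same random input on both sides of the difference,
        # so the noise is correlated (here, cancels) in the numerator.
        xi_minus = xi_plus if common_random_numbers else rng.standard_normal()
        grad_est = (sample_objective(x + c_n, xi_plus)
                    - sample_objective(x - c_n, xi_minus)) / (2 * c_n)
        x -= a_n * grad_est
    return x

print(kiefer_wolfowitz(common_random_numbers=True))   # close to 1.0
print(kiefer_wolfowitz(common_random_numbers=False))  # noisier estimate
```

    In this toy the noise enters additively, so reusing the same xi on both sides cancels it exactly; in general common random numbers only correlate the two evaluations, which is what improves the convergence rate.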

  2. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  3. Algorithm refinement for stochastic partial differential equations.

    SciTech Connect

    Alexander, F. J.; Garcia, Alejandro L.,; Tartakovsky, D. M.

    2001-01-01

    A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. A variety of numerical experiments were performed for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except within the particle region, far from the interface. Extensions of the methodology to fluid mechanics applications are discussed.

  4. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to current techniques in both areas. We studied different ways of using importance sampling in the context of stochastic programming by varying the choice of approximation functions used in the method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: it reduces the problem from finding the mean of a computationally expensive function to finding that of an inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This approach achieved similar variance reductions in orders of magnitude less time than applying the variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point for the stochastic solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.

  5. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to polynomial chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and, as a numerical example, provides the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
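
    A minimal sketch of the collocation idea on a toy scalar problem: one deterministic solve per quadrature node, with moments recovered by quadrature. The decay model, node count, and uniform parameter distribution are assumptions for illustration, not details from the report.

```python
import numpy as np

# Toy model: u(T; xi) solves du/dt = -k(xi) u, u(0) = 1, with k(xi) = 1 + 0.3*xi
# and xi uniform on [-1, 1]. Collocate at Gauss-Legendre nodes and recover the
# mean and variance of u(T) by quadrature -- one deterministic solve per node.
nodes, weights = np.polynomial.legendre.leggauss(8)
weights = weights / 2.0          # weights for the uniform density on [-1, 1]

T = 1.0
u_at_nodes = np.exp(-(1.0 + 0.3 * nodes) * T)   # deterministic solve per node

mean = np.sum(weights * u_at_nodes)
var = np.sum(weights * (u_at_nodes - mean) ** 2)
print(mean, var)
```

    Because the nodes decouple, each solve can reuse an existing deterministic code unchanged, which is the practical advantage over the Galerkin approach mentioned above.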

  6. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to the stochastic nature of these algorithms, their assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic, even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
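
    The following sketch illustrates the bootstrap step the abstract relies on: resampling a small set of algorithm runs to estimate the sampling distribution of a statistic. The solvers, run counts, and normally distributed results are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_statistic(run_results, stat=np.median, n_boot=2000):
    """Estimate the sampling distribution of `stat` over independent runs
    of a stochastic algorithm by resampling the runs with replacement."""
    run_results = np.asarray(run_results)
    boot = np.array([stat(rng.choice(run_results, size=run_results.size))
                     for _ in range(n_boot)])
    return boot

# Hypothetical best-objective values from 15 runs of two solvers.
alg_a = rng.normal(1.00, 0.05, size=15)
alg_b = rng.normal(0.97, 0.20, size=15)

for name, runs in [("A", alg_a), ("B", alg_b)]:
    boot = bootstrap_statistic(runs)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"algorithm {name}: median in [{lo:.3f}, {hi:.3f}] (95% bootstrap CI)")
```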

  7. Perspective: Stochastic algorithms for chemical kinetics

    NASA Astrophysics Data System (ADS)

    Gillespie, Daniel T.; Hellander, Andreas; Petzold, Linda R.

    2013-05-01

    We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes.
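
    For readers unfamiliar with the baseline method the perspective builds on, here is a minimal sketch of Gillespie's direct-method SSA for a birth-death network; the network and rate constants are illustrative choices, not from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0):
    """Direct-method SSA for the network: 0 -> X (rate k_birth),
    X -> 0 (rate k_death * x). Returns jump times and copy numbers."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])   # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)          # time to next reaction
        r = rng.choice(2, p=a / a0)             # which reaction fires
        x += 1 if r == 0 else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
print(states[-1])  # fluctuates around k_birth / k_death = 100
```

    The exponential clock and the propensity-weighted choice of channel are the two ingredients that the stiff-system, rare-event, and spatial strategies discussed in the abstract all build on.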

  8. Perspective: Stochastic algorithms for chemical kinetics.

    PubMed

    Gillespie, Daniel T; Hellander, Andreas; Petzold, Linda R

    2013-05-01

    We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes.

  9. Stochastic template placement algorithm for gravitational wave data analysis

    SciTech Connect

    Harry, I. W.; Sathyaprakash, B. S.; Allen, B.

    2009-11-15

    This paper presents an algorithm for constructing matched-filter template banks in an arbitrary parameter space. The method places templates at random, then removes those which are 'too close' together. The properties and optimality of stochastic template banks generated in this manner are investigated for some simple models. The effectiveness of these template banks for gravitational wave searches for binary inspiral waveforms is also examined. The properties of a stochastic template bank are then compared to the deterministically placed template banks that are currently used in gravitational wave data analysis.
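
    A toy sketch of the placement rule described above: propose points uniformly at random and keep only those not "too close" to points already kept. The unit-cube parameter space, Euclidean metric, and distance threshold are simplifying assumptions; real template banks use a metric induced by the waveform overlap.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_template_bank(n_propose=2000, min_dist=0.05, dim=2):
    """Propose templates uniformly at random in the unit hypercube, then
    discard any proposal that lands too close to an already-kept template."""
    kept = []
    for _ in range(n_propose):
        candidate = rng.random(dim)
        if all(np.linalg.norm(candidate - t) >= min_dist for t in kept):
            kept.append(candidate)
    return np.array(kept)

bank = stochastic_template_bank()
print(len(bank), "templates kept")
```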

  10. Random musings on stochastics (Lorenz Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2014-12-01

    In 1960 Lorenz identified the chaotic nature of atmospheric dynamics, thus highlighting the importance of the discovery of chaos by Poincaré, 70 years earlier, in the motion of three bodies. Chaos in the macroscopic world offered a natural way to explain unpredictability, that is, randomness. Concurrently with Poincaré's discovery, Boltzmann introduced statistical physics, while soon after Borel and Lebesgue laid the foundation of measure theory, later (in the 1930s) used by Kolmogorov as the formal foundation of probability theory. Subsequently, Kolmogorov and Khinchin introduced the concepts of stochastic processes and stationarity, and advanced the concept of ergodicity. All these areas are now collectively described by the term "stochastics", which includes probability theory, stochastic processes and statistics. As paradoxical as it may seem, stochastics offers the tools to deal with chaos, even if it results from deterministic dynamics. As chaos entails uncertainty, it is more informative and effective to replace the study of exact system trajectories with that of probability densities. Also, as the exact laws of complex systems can hardly be deduced by synthesis of the detailed interactions of system components, these laws should inevitably be inferred by induction, based on observational data and using statistics. The arithmetic of stochastics is quite different from that of regular numbers. Accordingly, it needs the development of intuition and interpretations which differ from those built upon deterministic considerations. Using stochastic tools in a deterministic context may result in mistaken conclusions. In an attempt to contribute to a more correct interpretation and use of stochastic concepts in typical tasks of nonlinear systems, several examples are studied, which aim (a) to clarify the difference in the meaning of linearity in deterministic and stochastic context; (b) to contribute to a more attentive use of stochastic concepts (entropy, statistical

  11. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, and is thus more prone to converging to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  12. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams, Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  13. An algorithm for multivariate weak stochastic dominance

    SciTech Connect

    Mosler, K.

    1994-12-31

    The talk addresses the computational problem of comparing two given probability distributions in n-space with respect to several stochastic orderings. The orderings investigated are weak first degree stochastic dominance, weak second degree stochastic dominance, and their dual ordering relations. For each of the four dominance relations we present conditions which are necessary and sufficient for dominance of F over G when F and G have finite support in n-space. An algorithm is proposed which operates efficiently on the join-semilattice generated by their joint support. If F and G are empirical distribution functions, and F̄ and Ḡ denote the underlying probability laws, significance tests can be performed on F̄ = Ḡ against the alternative that F̄ ≠ Ḡ and F̄ dominates Ḡ in one of the four orderings. Other applications are found in decision theory, applied probability, operations research, and economics.
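
    As a simplified illustration (univariate rather than the paper's multivariate setting), first-degree stochastic dominance between two finite-support distributions can be checked by comparing CDFs on the union of their supports; everything below is a toy assumption, not the proposed join-semilattice algorithm.

```python
import numpy as np

def dominates_fsd(support_f, probs_f, support_g, probs_g):
    """First-degree stochastic dominance of F over G for distributions with
    finite support on the real line: F dominates G iff the CDF of F lies
    at or below the CDF of G everywhere."""
    grid = np.union1d(support_f, support_g)
    cdf = lambda s, p: np.array([p[s <= x].sum() for x in grid])
    return np.all(cdf(np.asarray(support_f), np.asarray(probs_f))
                  <= cdf(np.asarray(support_g), np.asarray(probs_g)) + 1e-12)

# F puts more mass on larger outcomes than G, so F dominates G.
print(dominates_fsd([1, 2, 3], [0.2, 0.3, 0.5],
                    [1, 2, 3], [0.5, 0.3, 0.2]))   # True
```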

  14. A hierarchical exact accelerated stochastic simulation algorithm

    PubMed Central

    Orendorff, David; Mjolsness, Eric

    2012-01-01

    A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214

  15. Perspective: Stochastic algorithms for chemical kinetics

    PubMed Central

    Gillespie, Daniel T.; Hellander, Andreas; Petzold, Linda R.

    2013-01-01

    We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes. PMID:23656106

  16. Stochastic algorithms for Markov models estimation with intermittent missing data.

    PubMed

    Deltour, I; Richardson, S; Le Hesran, J Y

    1999-06-01

    Multistate Markov models are frequently used to characterize disease processes, but their estimation from longitudinal data is often hampered by complex patterns of incompleteness. Two algorithms for estimating Markov chain models in the case of intermittent missing data in longitudinal studies, a stochastic EM algorithm and the Gibbs sampler, are described. The first can be viewed as a random perturbation of the EM algorithm and is appropriate when the M step is straightforward but the E step is computationally burdensome. It leads to a good approximation of the maximum likelihood estimates. The Gibbs sampler is used for a full Bayesian inference. The performances of the two algorithms are illustrated on two simulated data sets. A motivating example concerned with the modelling of the evolution of parasitemia by Plasmodium falciparum (malaria) in a cohort of 105 young children in Cameroon is described and briefly analyzed.
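
    A minimal sketch of the stochastic EM idea for a discrete Markov chain with intermittently missing states: sample the missing entries given their neighbours (S-step), then re-estimate the transition matrix from counts (M-step). The two-state chain, single Gibbs-style pass, count smoothing, and handling of boundary gaps are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def stochastic_em(obs, n_states=2, n_iter=200):
    """Stochastic EM for a Markov chain with intermittently missing states
    (None entries). S-step: sample each interior missing state given its
    neighbours under the current transition matrix P. M-step: re-estimate
    P from transition counts of the completed sequence."""
    P = np.full((n_states, n_states), 1.0 / n_states)
    seq = [s if s is not None else rng.integers(n_states) for s in obs]
    for _ in range(n_iter):
        # S-step: P(s_t | s_{t-1}, s_{t+1}) is proportional to
        # P[s_{t-1}, s_t] * P[s_t, s_{t+1}].
        for t, s in enumerate(obs):
            if s is None and 0 < t < len(obs) - 1:
                w = P[seq[t - 1], :] * P[:, seq[t + 1]]
                seq[t] = rng.choice(n_states, p=w / w.sum())
        # M-step: maximum-likelihood transition matrix from counts.
        counts = np.ones((n_states, n_states))   # +1 smoothing avoids zeros
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
        P = counts / counts.sum(axis=1, keepdims=True)
    return P

obs = [0, None, 0, 1, None, 1, 0, None, 0, 1, 1, None, 0] * 10
print(stochastic_em(obs))
```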

  17. Constant-complexity stochastic simulation algorithm with optimal binning

    SciTech Connect

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.

  18. Constant-complexity stochastic simulation algorithm with optimal binning

    NASA Astrophysics Data System (ADS)

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-01

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie's Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.

  19. Randomized approximate nearest neighbors algorithm.

    PubMed

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in ℝ^d, the algorithm attempts to find k nearest neighbors for each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·log d + k·(d + log k)·log N) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ ℝ^d. The cost of each such query is proportional to T·(d·log d + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.

  20. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force the explicit statement of all hypotheses and prevent incorrect uses of theorems.

  1. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
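
    A minimal sketch of a creeping random search with one convergence-speeding modification (shrinking the step size after failed moves), applied to a toy two-parameter identification problem; the loss, model, and schedule are illustrative assumptions, not the report's original formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def creeping_random_search(f, x0, sigma0=1.0, shrink=0.95, n_iter=500):
    """Perturb the best point with Gaussian steps; keep improvements and
    shrink the step size on failures to speed convergence near an extremal."""
    x, fx, sigma = np.asarray(x0, float), f(x0), sigma0
    for _ in range(n_iter):
        cand = x + sigma * rng.standard_normal(x.shape)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
        else:
            sigma *= shrink
    return x, fx

# Illustrative two-parameter identification surrogate: fit (a, b) in a model
# y = a * exp(-b * t) to noiseless data generated with a = 2, b = 0.5.
t = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-0.5 * t)
loss = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - y) ** 2)
print(creeping_random_search(loss, [1.0, 1.0]))
```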

  2. A space-time cluster algorithm for stochastic processes.

    SciTech Connect

    Gulbahce, N.

    2003-01-01

    We introduce a space-time cluster algorithm that will generate histories of stochastic processes. Michael Zimmer introduced a space-time MC algorithm for stochastic classical dynamics and applied it to simulate the Ising model with Glauber dynamics. Following his approach, we extended Brower and Tamayo's embedded φ⁴ dynamics to space and time. We believe our algorithm can be applied to more general stochastic systems. Why space-time? To be able to study nonequilibrium systems, we need to know the probability of the 'history' of a nonequilibrium state; histories are the entire space-time configurations. Cluster algorithms, first introduced by Swendsen and Wang (SW), are useful for overcoming critical slowing down. Brower and Tamayo mapped continuous field variables to Ising spins, and grew and flipped SW clusters to gain speed. Our algorithm is an extended version of theirs in space and time.

  3. Stochastic reaction–diffusion algorithms for macromolecular crowding

    NASA Astrophysics Data System (ADS)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction–diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction–diffusion simulations is investigated. Reaction–diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35–53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
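
    The crowding effect described above can be illustrated with a toy lattice walk in which a fraction of sites is blocked and moves into blocked sites are rejected; the 2D periodic lattice and parameter values are assumptions for illustration, not the paper's SSA or spatiocyte setups.

```python
import numpy as np

rng = np.random.default_rng(6)

def msd_with_crowding(obstacle_fraction=0.3, n_steps=500, n_walkers=200, L=64):
    """Random walk on a 2D periodic lattice where a fraction of sites is
    blocked by immobile obstacles; moves into blocked sites are rejected."""
    blocked = rng.random((L, L)) < obstacle_fraction
    free = np.argwhere(~blocked)                # start walkers on free sites
    pos = free[rng.choice(len(free), n_walkers)]
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    disp = np.zeros((n_walkers, 2))             # unwrapped displacement
    for _ in range(n_steps):
        step = moves[rng.integers(4, size=n_walkers)]
        trial = (pos + step) % L
        ok = ~blocked[trial[:, 0], trial[:, 1]]
        pos[ok] = trial[ok]
        disp[ok] += step[ok]
    return np.mean(np.sum(disp ** 2, axis=1))

print(msd_with_crowding(0.0), msd_with_crowding(0.3))  # crowding lowers MSD
```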

  4. Multiscale stochastic simulation algorithm with stochastic partial equilibrium assumption for chemically reacting systems

    SciTech Connect

    Cao, Yang (ycao@cs.ucsb.edu); Gillespie, Dan (GillespieDT@mailaps.org); Petzold, Linda (petzold@engineering.ucsb.edu)

    2005-07-01

    In this paper, we introduce a multiscale stochastic simulation algorithm (MSSA) which makes use of Gillespie's stochastic simulation algorithm (SSA) together with a new stochastic formulation of the partial equilibrium assumption (PEA). This method is much more efficient than SSA alone. It works even with a very small population of fast species. Implementation details are discussed, and an application to the modeling of the heat shock response of E. coli is presented, which demonstrates the excellent efficiency and accuracy obtained with the new method.

  5. Random attractor of non-autonomous stochastic Boussinesq lattice system

    SciTech Connect

    Zhao, Min; Zhou, Shengfan

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupling coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.

  6. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives are discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/controls can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule, and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high-dimensional spaces, particularly when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions) and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse-grid stochastic collocation FEM approach. This allows for decoupled, moderately high dimensional, parameterized computations of the stochastic optimality system, where at each collocation point deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates, for the fully discrete problems, will be described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical

  7. Stochastic theory of the Stokes parameters in randomly twisted fiber

    SciTech Connect

    Botet, Robert; Kuratsuji, Hiroshi

    2011-03-15

    We present the stochastic approach of the polarization state of an electromagnetic wave traveling through randomly twisted optical fiber. We treat the case of the weak randomness. When the geometric torsion of the fiber is distributed as a Gaussian law, we can write explicitly the Fokker-Planck equation for the Stokes parameters of the wave, and find the complete solution of the polarization-state distribution.

  8. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  9. Pathwise random periodic solutions of stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Feng, Chunrong; Zhao, Huaizhong; Zhou, Bo

    In this paper, we study the existence of random periodic solutions for semilinear stochastic differential equations. We identify these as the solutions of coupled forward-backward infinite horizon stochastic integral equations in general cases. We then use the argument of the relative compactness of Wiener-Sobolev spaces in C([0,T], L²(Ω)) and generalized Schauder's fixed point theorem to prove the existence of a solution of the coupled stochastic forward-backward infinite horizon integral equations. The condition on F is then further weakened by applying the coupling method of forward and backward Gronwall inequalities. The results are also valid for stationary solutions as a special case when the period τ can be an arbitrary number.

  10. Quantum stochastic walks: A generalization of classical random walks and quantum walks

    NASA Astrophysics Data System (ADS)

    Aspuru-Guzik, Alan

    2010-03-01

    We introduce the quantum stochastic walk (QSW), which determines the evolution of a generalized quantum mechanical walk on a graph that obeys a quantum stochastic equation of motion. Using an axiomatic approach, we specify the rules for all possible quantum, classical, and quantum-stochastic transitions from a vertex as defined by its connectivity. We show how the family of possible QSWs encompasses both the classical random walk (CRW) and the quantum walk (QW) as special cases, but also includes more general probability distributions. As an example, we study the QSW on a line, the QW-to-CRW transition, and transitions to generalized QSWs that go beyond the CRW and QW. QSWs provide a new framework for the study of quantum algorithms as well as of quantum walks with environmental effects.

  11. Stochastic oscillator with random mass: New type of Brownian motion

    NASA Astrophysics Data System (ADS)

    Gitterman, M.

    2014-02-01

    The model of a stochastic oscillator subject to an additive random force, which includes the Brownian motion, is widely used for the analysis of different phenomena in physics, chemistry, biology, economics and social science. As a rule, by the appropriate choice of units one assumes that the particle's mass is equal to unity. However, for the case of an additional multiplicative random force, the situation is more complicated. As we show in this review article, for the cases of random frequency or random damping, the mass cannot be excluded from the equations of motion, and, for example, besides the restriction on the size of a Brownian particle, some restrictions also exist on its mass. In addition to these two types of multiplicative forces, we consider the random mass, which describes, among other things, Brownian motion with adhesion. The fluctuations of mass are modeled as a dichotomous noise, and the first two moments of the coordinate show non-monotonic dependence on the parameters of the oscillator and noise. In the presence of an additional periodic force, an oscillator with random mass exhibits the stochastic resonance phenomenon, in which the presence of noise amplifies the response to the input signal.

  12. STP: A Stochastic Tunneling Algorithm for Global Optimization

    SciTech Connect

    Oblow, E.M.

    1999-05-20

    A stochastic approach to solving continuous-function global optimization problems is presented. It builds on the tunneling approach to deterministic optimization presented by Barhen et al. by combining a series of local descents with stochastic searches. The method uses a rejection-based stochastic procedure to locate new local minima descent regions and a fixed Lipschitz-like constant to reject unpromising regions in the search space, thereby increasing the efficiency of the tunneling process. The algorithm is easily implemented in low-dimensional problems and scales easily to large problems; without further heuristics, however, it is less effective in the latter case. Several improvements to the basic algorithm, which make use of approximate estimates of the algorithm's parameters for implementation in high-dimensional problems, are also discussed. Benchmark results are presented, which show that the algorithm is competitive with the best previously reported global optimization techniques. A successful application of the approach to a large-scale seismology problem of substantial computational complexity using a low-dimensional approximation scheme is also reported.
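
    A sketch of a generic stochastic tunneling scheme in the same spirit as the procedure described above (this is the common Wenzel-Hamacher-style transformation, an assumption for illustration, not the STP implementation itself): a Metropolis search on a transformed surface that flattens barriers above the best value found so far.

```python
import numpy as np

rng = np.random.default_rng(7)

def stun_minimize(f, x0, gamma=1.0, beta=5.0, sigma=0.5, n_iter=20000):
    """Stochastic tunneling: Metropolis search on the transformed surface
    f_stun(x) = 1 - exp(-gamma * (f(x) - f_best)), which flattens barriers
    above the best value found so far while preserving its minima."""
    x = np.asarray(x0, float)
    fx = f(x)
    x_best, f_best = x.copy(), fx
    for _ in range(n_iter):
        cand = x + sigma * rng.standard_normal(x.shape)
        fc = f(cand)
        if fc < f_best:
            x_best, f_best = cand.copy(), fc
        stun_x = 1.0 - np.exp(-gamma * (fx - f_best))
        stun_c = 1.0 - np.exp(-gamma * (fc - f_best))
        if rng.random() < np.exp(-beta * (stun_c - stun_x)):
            x, fx = cand, fc
    return x_best, f_best

# Rastrigin-like multimodal test function.
f = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(stun_minimize(f, rng.uniform(-5, 5, size=2)))
```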

  13. Robust algorithms for solving stochastic partial differential equations

    SciTech Connect

    Werner, M.J.; Drummond, P.D.

    1997-04-01

    A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in χ^(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable for higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. 27 refs., 4 figs.

  14. Stochastic calculus for uncoupled continuous-time random walks.

    PubMed

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L

    2009-06-01

    The continuous-time random walk (CTRW) is a pure-jump stochastic process with several applications not only in physics but also in insurance, finance, and economics. A definition is given for a class of stochastic integrals driven by a CTRW, which includes the Itō and Stratonovich cases. An uncoupled CTRW with zero-mean jumps is a martingale. It is proved that, as a consequence of the martingale transform theorem, if the CTRW is a martingale, the Itō integral is a martingale too. It is shown how the definition of the stochastic integrals can be used to easily compute them by Monte Carlo simulation. The relations between a CTRW, its quadratic variation, its Stratonovich integral, and its Itō integral are highlighted by numerical calculations when the jumps in space of the CTRW have a symmetric Lévy α-stable distribution and its waiting times have a one-parameter Mittag-Leffler distribution. Remarkably, these distributions have fat tails and an unbounded quadratic variation. In the diffusive limit of vanishing scale parameters, the probability density of this kind of CTRW satisfies the space-time fractional diffusion equation (FDE) or, more generally, the fractional Fokker-Planck equation, which generalizes the standard diffusion equation, solved by the probability density of the Wiener process, and thus provides a phenomenological model of anomalous diffusion. We also provide an analytic expression for the quadratic variation of the stochastic process described by the FDE and check it by Monte Carlo.

  15. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average for the solution functions, which are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely, the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal source of radiation are obtained and represented graphically.

  16. Randomized Algorithms for Matrices and Data

    NASA Astrophysics Data System (ADS)

    Mahoney, Michael W.

    2012-03-01

    This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
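
    Leverage scores as characterized in the passage (squared row norms of the top-k left singular vectors, i.e., diagonal entries of the projection onto the dominant part of the spectrum) can be computed directly; the matrix and the outlying row below are synthetic.

```python
import numpy as np

def leverage_scores(A, k):
    """Statistical leverage scores relative to the best rank-k approximation:
    squared row norms of the matrix of top-k left singular vectors."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U[:, :k] ** 2, axis=1)

rng = np.random.default_rng(8)
A = rng.standard_normal((100, 5))
A[17] *= 20                      # make one row highly influential
scores = leverage_scores(A, k=5)
print(scores.argmax())           # 17: the outlying row has large leverage
```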

  17. Computing gap free Pareto front approximations with stochastic search algorithms.

    PubMed

    Schütze, Oliver; Laumanns, Marco; Tantar, Emilia; Coello, Carlos A Coello; Talbi, El-Ghazali

    2010-01-01

    Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of ε-dominance. Though bounds on the quality of the limit approximation, which are entirely determined by the archiving strategy and the value of ε, have been obtained, the strategies do not guarantee to obtain a gap free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a ∈ A, is "large." Since such gap free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we aim in this work for such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for a possible hybridization of a given stochastic search algorithm with a particular local search strategy, multi-objective continuation methods, by showing that the concept of ε-dominance can be integrated into this approach in a suitable way.
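
    A minimal sketch of one common ε-dominance archiving rule (box-based, for minimization), in the spirit of the archiving strategies discussed; the paper's exact update rules differ, and this box scheme is an assumption for illustration.

```python
import numpy as np

def update_epsilon_archive(archive, f_new, eps):
    """Maintain a finite epsilon-Pareto archive for minimization: objective
    space is partitioned into boxes of side eps, and at most one point is
    kept per non-dominated box."""
    box_new = tuple(np.floor(np.asarray(f_new) / eps).astype(int))
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    boxes = {tuple(np.floor(np.asarray(f) / eps).astype(int)): f for f in archive}
    if any(dominates(b, box_new) or b == box_new for b in boxes):
        return archive                      # new point adds nothing
    # Drop entries whose boxes are dominated by the newcomer's box.
    kept = [f for b, f in boxes.items() if not dominates(box_new, b)]
    return kept + [tuple(f_new)]

archive = []
for f in [(4.0, 1.0), (1.0, 4.0), (3.9, 0.9), (2.0, 2.0)]:
    archive = update_epsilon_archive(archive, f, eps=0.5)
print(archive)
```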

  18. Decomposition algorithms for stochastic programming on a computational grid.

    SciTech Connect

    Linderoth, J.; Wright, S.

    2003-01-01

    We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.

  19. A generalization to stochastic averaging in random vibration

    SciTech Connect

    Red-Horse, J.R.

    1992-06-01

    Stochastic averaging is applied to a class of randomly excited single-degree-of-freedom oscillators possessing linear damping and nonlinear stiffness terms. The assumed excitation form involves an externally applied evolutionary Gaussian stochastic process. Special emphasis is placed on casting the problem in a more formal mathematical framework than that traditionally used in engineering applications. For the case under consideration, it is shown that a critical step involves the selection of an appropriate period of oscillation over which the temporal averaging can be performed. As an example, this averaging procedure is performed on a Duffing oscillator. The validity of the derived result is partially confirmed by reducing it to a special case for which there is a known solution and comparing the two solutions.

  20. Random attractors for the stochastic coupled fractional Ginzburg-Landau equation with additive noise

    SciTech Connect

    Shu, Ji; Li, Ping; Zhang, Jia; Liao, Ou

    2015-10-15

    This paper is concerned with the stochastic coupled fractional Ginzburg-Landau equation with additive noise. We first transform the stochastic coupled fractional Ginzburg-Landau equation into random equations whose solutions generate a random dynamical system. Then we prove the existence of a random attractor for this random dynamical system.

  1. Stochastic deletion-insertion algorithm to construct dense linkage maps

    PubMed Central

    Wu, Jixiang; Lou, Xiang-Yang; Gonda, Michael

    2011-01-01

    In this study, we proposed a stochastic deletion-insertion (SDI) algorithm for constructing large-scale linkage maps. This SDI algorithm was compared with three published approximation approaches, the seriation (SER), neighbor mapping (NM), and unidirectional growth (UG) approaches, on the basis of simulated F2 data with different population sizes, missing genotype rates, and numbers of markers. Simulation results showed that the SDI method had a similar or higher percentage of correct linkage orders than the other three methods. This SDI algorithm was also applied to a real dataset and compared with the other three methods. The total linkage map distance (cM) obtained by the SDI method (148.08 cM) was smaller than the distance obtained by SER (225.52 cM) and two published distances (150.11 cM and 150.38 cM). Since this SDI algorithm is stochastic, a more accurate linkage order can be quickly obtained by repeating this algorithm. Thus, this SDI method, which combines the advantages of accuracy and speed, is an important addition to the current linkage mapping toolkit for constructing improved linkage maps. PMID:21927641
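
    A toy sketch of a stochastic deletion-insertion move for ordering markers: delete a random marker, reinsert it at a random position, and keep the move if the total adjacent map distance does not increase. The distance model and acceptance rule are simplifying assumptions, not the authors' exact criterion.

```python
import numpy as np

rng = np.random.default_rng(9)

def sdi_order(dist, n_iter=5000):
    """Stochastic deletion-insertion sketch: repeatedly delete a random marker
    from the current order and reinsert it at a random position, keeping the
    move whenever the total adjacent distance (map length) does not increase."""
    n = dist.shape[0]
    order = list(rng.permutation(n))
    length = lambda o: sum(dist[o[i], o[i + 1]] for i in range(len(o) - 1))
    best = length(order)
    for _ in range(n_iter):
        cand = order.copy()
        m = cand.pop(rng.integers(n))
        cand.insert(rng.integers(n), m)
        if (l := length(cand)) <= best:
            order, best = cand, l
    return order, best

# Hypothetical pairwise distances for 8 markers laid out on a line.
true_pos = np.sort(rng.uniform(0, 100, size=8))
dist = np.abs(true_pos[:, None] - true_pos[None, :])
order, total = sdi_order(dist)
print(order, round(total, 2))   # recovers the line order (or its reverse)
```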

  2. Implementing Quality Control on a Random Number Stream to Improve a Stochastic Weather Generator

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For decades stochastic modelers have used computerized random number generators to produce random numeric sequences fitting a specified statistical distribution. Unfortunately, none of the random number generators we tested satisfactorily produced the target distribution. The result is generated d...

  3. Efficient stochastic Galerkin methods for random diffusion equations

    SciTech Connect

    Xiu, Dongbin; Shen, Jie

    2009-02-01

    We discuss in this paper efficient solvers for stochastic diffusion equations in random media. We employ generalized polynomial chaos (gPC) expansion to express the solution in a convergent series and obtain a set of deterministic equations for the expansion coefficients by Galerkin projection. Although the resulting system of diffusion equations is coupled, we show that one can construct fast numerical methods to solve them in a decoupled fashion. The methods are based on separation of the diagonal terms and off-diagonal terms in the matrix of the Galerkin system. We examine properties of this matrix and show that the proposed method is unconditionally stable for unsteady problems and convergent for steady problems, with a convergence rate independent of the discretization parameters. Numerical examples are provided, for both steady and unsteady random diffusions, to support the analysis.

  4. An adaptive multi-level simulation algorithm for stochastic biological systems

    SciTech Connect

    Lester, C.; Giles, M. B.; Baker, R. E.; Yates, C. A.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
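
    The path coupling at the heart of the multi-level estimator can be sketched for a birth-death process: a coarse and a fine tau-leap path share Poisson randomness channel-by-channel by splitting each count into a common part and independent excesses (in the style of the Anderson-Higham construction cited above). The parameter values and the clamping of negative populations are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(10)

def coupled_tau_leap(x0=100, k1=10.0, k2=0.1, tau=0.5, n_coarse=40):
    """One coupled pair of tau-leap paths (coarse step tau, fine step tau/2)
    for 0 -> X, X -> 0. Each channel's Poisson count is split into a common
    Poisson(min(ac, af)) part plus independent excess terms, which keeps the
    two paths tightly correlated."""
    xc = xf = x0
    stoich = np.array([1, -1])
    for _ in range(n_coarse):
        ac = np.array([k1, k2 * xc])        # held fixed over the coarse step
        for _ in range(2):                  # two fine substeps per coarse step
            af = np.array([k1, k2 * xf])
            lo = np.minimum(ac, af)
            common = rng.poisson(lo * tau / 2)
            kc = common + rng.poisson((ac - lo) * tau / 2)
            kf = common + rng.poisson((af - lo) * tau / 2)
            xc = max(xc + stoich @ kc, 0)   # clamp: crude negativity guard
            xf = max(xf + stoich @ kf, 0)
    return xc, xf

pairs = [coupled_tau_leap() for _ in range(200)]
diff = np.array([c - f for c, f in pairs])
print(diff.var())   # small variance: the paths stay tightly coupled
```

    The small variance of the coarse-fine difference is exactly what makes each correction term in the multi-level estimator cheap to estimate.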

  5. An adaptive multi-level simulation algorithm for stochastic biological systems

    NASA Astrophysics Data System (ADS)

    Lester, C.; Yates, C. A.; Giles, M. B.; Baker, R. E.

    2015-01-01

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the

  6. A stochastic maximum principle for backward control systems with random default time

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Kuen Siu, Tak

    2013-05-01

    This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.

  7. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    SciTech Connect

    DeVille, R.E.L.; Riemer, N.; West, M.

    2011-09-20

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  8. Stochastic simulation algorithm for the quantum linear Boltzmann equation.

    PubMed

    Busse, Marc; Pietrulewicz, Piotr; Breuer, Heinz-Peter; Hornberger, Klaus

    2010-08-01

    We develop a Monte Carlo wave function algorithm for the quantum linear Boltzmann equation, a Markovian master equation describing the quantum motion of a test particle interacting with the particles of an environmental background gas. The algorithm leads to a numerically efficient stochastic simulation procedure for the most general form of this integrodifferential equation, which involves a five-dimensional integral over microscopically defined scattering amplitudes that account for the gas interactions in a nonperturbative fashion. The simulation technique is used to assess various limiting forms of the quantum linear Boltzmann equation, such as the limits of pure collisional decoherence and quantum Brownian motion, the Born approximation, and the classical limit. Moreover, we extend the method to allow for the simulation of the dissipative and decohering dynamics of superpositions of spatially localized wave packets, which enables the study of many physically relevant quantum phenomena, occurring e.g., in the interferometry of massive particles.

  9. Hybrid solution of stochastic optimal control problems using Gauss pseudospectral method and generalized polynomial chaos algorithms

    NASA Astrophysics Data System (ADS)

    Cottrill, Gerald C.

    A hybrid numerical algorithm combining the Gauss Pseudospectral Method (GPM) with a Generalized Polynomial Chaos (gPC) method to solve nonlinear stochastic optimal control problems with constraint uncertainties is presented. The GPM and gPC have been shown to be spectrally accurate numerical methods for solving deterministic optimal control problems and stochastic differential equations, respectively. The gPC uses collocation nodes to sample the random space, which are then inserted into the differential equations and solved by applying standard differential equation methods. The resulting set of deterministic solutions is used to characterize the distribution of the solution by constructing a polynomial representation of the output as a function of uncertain parameters. Optimal control problems are especially challenging to solve since they often include path constraints, bounded controls, boundary conditions, and require solutions that minimize a cost functional. Adding random parameters can make these problems even more challenging. The hybrid algorithm presented in this dissertation is the first time the GPM and gPC algorithms have been combined to solve optimal control problems with random parameters. Using the GPM in the gPC construct provides minimum cost deterministic solutions used in stochastic computations that meet path, control, and boundary constraints, thus extending current gPC methods to be applicable to stochastic optimal control problems. The hybrid GPM-gPC algorithm was applied to two concept demonstration problems: a nonlinear optimal control problem with multiplicative uncertain elements and a trajectory optimization problem simulating an aircraft flying through a threat field where exact locations of the threats are unknown. The results show that the expected value, variance, and covariance statistics of the polynomial output function approximations of the state, control, cost, and terminal time variables agree with Monte Carlo simulation

  10. Stochastic permanence of an SIQS epidemic model with saturated incidence and independent random perturbations

    NASA Astrophysics Data System (ADS)

    Wei, Fengying; Chen, Fangxiang

    2016-07-01

    This article discusses a stochastic SIQS epidemic model with saturated incidence. We assume that random perturbations always fluctuate at the endemic equilibrium. The existence of a global positive solution is obtained by constructing a suitable Lyapunov function. Under some suitable conditions, we derive the stochastic boundedness and stochastic permanence of the solutions of a stochastic SIQS model. Some numerical simulations are carried out to check our results.

  11. Stochastic models for horizontal gene transfer: taking a random walk through tree space.

    PubMed

    Suchard, Marc A

    2005-05-01

    Horizontal gene transfer (HGT) plays a critical role in evolution across all domains of life with important biological and medical implications. I propose a simple class of stochastic models to examine HGT using multiple orthologous gene alignments. The models function in a hierarchical phylogenetic framework. The top level of the hierarchy is based on a random walk process in "tree space" that allows for the development of a joint probabilistic distribution over multiple gene trees and an unknown, but estimable species tree. I consider two general forms of random walks. The first form is derived from the subtree prune and regraft (SPR) operator that mirrors the observed effects that HGT has on inferred trees. The second form is based on walks over complete graphs and offers numerically tractable solutions for an increasing number of taxa. The bottom level of the hierarchy utilizes standard phylogenetic models to reconstruct gene trees given multiple gene alignments conditional on the random walk process. I develop a well-mixing Markov chain Monte Carlo algorithm to fit the models in a Bayesian framework. I demonstrate the flexibility of these stochastic models to test competing ideas about HGT by examining the complexity hypothesis. Using 144 orthologous gene alignments from six prokaryotes previously collected and analyzed, Bayesian model selection finds support for (1) the SPR model over the alternative form, (2) the 16S rRNA reconstruction as the most likely species tree, and (3) increased HGT of operational genes compared to informational genes.

  12. Hierarchical Stochastic Simulation Algorithm for SBML Models of Genetic Circuits.

    PubMed

    Watanabe, Leandro H; Myers, Chris J

    2014-01-01

    This paper describes a hierarchical stochastic simulation algorithm, which has been implemented within iBioSim, a tool used to model, analyze, and visualize genetic circuits. Many biological analysis tools flatten out hierarchy before simulation, but there are many disadvantages associated with this approach. First, the memory required to represent the model can quickly expand in the process. Second, the flattening process is computationally expensive. Finally, when modeling a dynamic cellular population within iBioSim, inlining the hierarchy of the model is inefficient since models must grow dynamically over time. This paper discusses a new approach to handle hierarchy on the fly to make the tool faster and more memory-efficient. This approach yields significant performance improvements as compared to the former flat analysis method.

  13. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2001-12-14

    Recent progress in simulation methodologies and new, high-performance parallel architectures have made it possible to perform detailed simulations of multidimensional combustion phenomena using comprehensive kinetics mechanisms. However, as simulation complexity increases, it becomes increasingly difficult to extract detailed quantitative information about the flame from the numerical solution, particularly regarding the details of chemical processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of combustion phenomena. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint in which we follow the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one species to another, with the subsequent transport given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion as a suitable random-walk process. Within this probabilistic framework, reactions can be viewed as a Markov process transforming molecule to molecule with given probabilities. In this paper, we discuss the numerical issues in more detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. We also illustrate how the method can be applied to studying the role of cyanochemistry on NOx production in a diffusion flame.
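
    A minimal sketch of the particle formulation described above, under illustrative parameters: advection is deterministic, diffusion is a Gaussian random walk, and the 'reaction' is a Markov jump that moves an atom to a new host species at a hypothetical rate k.

        import numpy as np

        rng = np.random.default_rng(2)

        def trace_atoms(n=10000, u=1.0, D=0.01, k=0.5, dt=1e-3, steps=1000):
            # Advect deterministically, diffuse as a Gaussian random walk, and
            # let a 'reaction' move each atom to a new host species as a
            # Markov jump with rate k.
            x = np.zeros(n)
            species = np.zeros(n, dtype=int)        # 0 = original host, 1 = product
            for _ in range(steps):
                x += u * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
                hop = (species == 0) & (rng.random(n) < k * dt)
                species[hop] = 1
            return x, species

        x, s = trace_atoms()
        print("mean position:", x.mean())           # ~ u*t = 1.0
        print("fraction reacted:", s.mean())        # ~ 1 - exp(-k*t) ~ 0.39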

  14. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for evaluating over-the-network estimation techniques under realistic radio channel conditions.

  15. Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs

    DOE PAGES Beta

    Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; Watson, Jean -Paul; Wets, Roger J.-B.; Woodruff, David L.

    2016-04-02

    We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.

  16. Efficient generation and optimization of stochastic template banks by a neighboring cell algorithm

    NASA Astrophysics Data System (ADS)

    Fehrmann, Henning; Pletsch, Holger J.

    2014-12-01

    Placing signal templates (grid points) as efficiently as possible to cover a multidimensional parameter space is crucial in computing-intensive matched-filtering searches for gravitational waves, but also in similar searches in other fields of astronomy. To generate efficient coverings of arbitrary parameter spaces, stochastic template banks have been advocated, where templates are placed at random while rejecting those too close to others. However, in this simple scheme, for each new random point its distance to every template in the existing bank is computed. This rapidly increasing number of distance computations can render the acceptance of new templates computationally prohibitive, particularly for wide parameter spaces or in large dimensions. This paper presents a neighboring cell algorithm that can dramatically improve the efficiency of constructing a stochastic template bank. By dividing the parameter space into subvolumes (cells), for an arbitrary point an efficient hashing technique is exploited to obtain the index of its enclosing cell along with the parameters of its neighboring templates. Hence only distances to these neighboring templates in the bank are computed, massively lowering the overall computing cost, as demonstrated in simple examples. Furthermore, we propose a novel method based on this technique to increase the fraction of covered parameter space solely by directed template shifts, without adding any templates. As is demonstrated in examples, this method can be highly effective.
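
    The cell-hash idea can be sketched as follows, assuming a unit-square parameter space with a flat (Euclidean) metric and an illustrative minimal distance; the paper's metric-aware distances and covering criteria are not reproduced here.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(3)

        def stochastic_bank(n_trials=20000, dim=2, min_dist=0.05):
            # Accept a random template only if no existing template lies
            # within min_dist; the cell hash restricts each distance check
            # to the 3**dim neighboring cells.
            grid, bank = {}, []
            for _ in range(n_trials):
                p = rng.random(dim)
                idx = tuple((p // min_dist).astype(int))
                neighbors = (tuple(np.add(idx, off))
                             for off in product((-1, 0, 1), repeat=dim))
                too_close = any(np.linalg.norm(p - q) < min_dist
                                for key in neighbors
                                for q in grid.get(key, ()))
                if not too_close:
                    bank.append(p)
                    grid.setdefault(idx, []).append(p)
            return bank

        print(len(stochastic_bank()), "templates accepted")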

  17. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search of a single individual, a typical sequential process. To ensure the parallel version retains the features of the sequential random search, we analyze the spatial patterns of the targets encountered in the sequential case for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution to each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.

  18. Two methods of random seed generation to avoid over-segmentation with stochastic watershed: application to nuclear fuel micrographs.

    PubMed

    Tolosa, S Cativa; Blacher, S; Denis, A; Marajofsky, A; Pirard, J-P; Gommes, C J

    2009-10-01

    A stochastic version of the watershed algorithm is obtained by choosing randomly in the image the seeds from which the watershed regions are grown. The output of the procedure is a probability density function corresponding to the probability that each pixel belongs to a boundary. In the present paper, two stochastic seed-generation processes are explored to avoid over-segmentation. The first is a non-uniform Poisson process, the density of which is optimized on the basis of opening granulometry. The second process positions the seeds randomly within disks centred on the maxima of a distance map. The two methods are applied to characterize the grain structure of nuclear fuel pellets. Estimators are proposed for the total edge length and grain number per unit area, L(A) and N(A), which take advantage of the probabilistic nature of the probability density function and do not require segmentation.

  19. Non-divergence of stochastic discrete time algorithms for PCA neural networks.

    PubMed

    Lv, Jian Cheng; Yi, Zhang; Li, Yunxia

    2015-02-01

    Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms cannot be divergent, but it is very difficult to directly study their convergence properties, because they are described by stochastic discrete time (SDT) algorithms. This brief analyzes the original SDT algorithms directly, and derives some invariant sets that guarantee the nondivergence of these algorithms in a stochastic environment by selecting proper learning parameters. Our theoretical results are verified by a series of simulation examples.

  20. Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations

    SciTech Connect

    Zhou Tao; Tang Tao

    2010-11-01

    In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.

  1. On Wiener-Masani's algorithm for finding the generating function of multivariate stochastic processes

    NASA Technical Reports Server (NTRS)

    Miamee, A. G.

    1988-01-01

    It is shown that the algorithms for determining the generating function and prediction error matrix of multivariate stationary stochastic processes developed by Wiener and Masani (1957), and later by Masani (1960), work in a more general setting.

  2. Convergence rates of finite difference stochastic approximation algorithms part I: general sampling

    NASA Astrophysics Data System (ADS)

    Dai, Liyi

    2016-05-01

    Stochastic optimization is a fundamental problem that finds applications in many areas including biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information of the sample object function that is typically difficult to obtain in practice. Recently there has been renewed interests in derivative free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, under various updating schemes using finite differences as gradient approximations. The analysis is carried out under a general framework covering a wide range of updating scenarios. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences.
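
    For reference, a minimal sketch of a Kiefer-Wolfowitz iteration with central finite differences on a noisy one-dimensional quadratic; the gain and width sequences below are one standard admissible choice, not the specific schemes analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        def noisy_f(x):
            # Sample objective: a quadratic observed through additive noise.
            return (x - 2.0) ** 2 + 0.1 * rng.standard_normal()

        x = 0.0
        for n in range(1, 5001):
            a_n = 1.0 / n                # gains: sum a_n diverges, a_n -> 0
            c_n = 1.0 / n ** 0.25        # difference widths shrink more slowly
            g = (noisy_f(x + c_n) - noisy_f(x - c_n)) / (2.0 * c_n)
            x -= a_n * g                 # move against the estimated gradient
        print("estimate:", x, "(true minimizer: 2.0)")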

  3. A linear modulation-based stochastic resonance algorithm applied to the detection of weak chromatographic peaks.

    PubMed

    Deng, Haishan; Xiang, Bingren; Liao, Xuewei; Xie, Shaofei

    2006-12-01

    A simple stochastic resonance algorithm based on linear modulation was developed to amplify and detect weak chromatographic peaks. The output chromatographic peak is often distorted when using the traditional stochastic resonance algorithm due to the presence of high levels of noise. In the new algorithm, a linear modulated double-well potential is introduced to correct for the distortion of the output peak. Method parameter selection is convenient and intuitive for linear modulation. In order to achieve a better signal-to-noise ratio for the output signal, the performance of two-layer stochastic resonance was evaluated by comparing it with wavelet-based stochastic resonance. The proposed algorithm was applied to the quantitative analysis of dimethyl sulfide and the determination of chloramphenicol residues in milk, and the good linearity of the method demonstrated that it is an effective tool for detecting weak chromatographic peaks.
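
    The underlying stochastic-resonance mechanism (not the paper's linear-modulation variant) can be sketched with a fixed double-well potential driven by a weak, hypothetical peak-shaped input:

        import numpy as np

        rng = np.random.default_rng(5)

        dt = 1e-3
        t = np.arange(0.0, 50.0, dt)
        peak = 0.3 * np.exp(-0.5 * (t - 25.0) ** 2)     # weak buried peak

        a, b, D = 1.0, 1.0, 0.2    # wells at +/-1, barrier height a**2/(4b)
        x = np.zeros(len(t))
        for i in range(1, len(t)):
            drift = a * x[i-1] - b * x[i-1] ** 3 + peak[i-1]
            x[i] = x[i-1] + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()

        inside = (t > 22.0) & (t < 28.0)
        print("P(x > 0) inside peak window :", (x[inside] > 0).mean())
        print("P(x > 0) outside peak window:", (x[~inside] > 0).mean())

    With these illustrative values, the residence probability in the positive well rises while the weak peak is present, which is the signature such detection methods exploit.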

  4. Fluorescence microscopy image noise reduction using a stochastically-connected random field model

    PubMed Central

    Haider, S. A.; Cameron, A.; Siva, P.; Lui, D.; Shafiee, M. J.; Boroomand, A.; Haider, N.; Wong, A.

    2016-01-01

    Fluorescence microscopy is an essential part of a biologist’s toolkit, allowing assaying of many parameters like subcellular localization of proteins, changes in cytoskeletal dynamics, protein-protein interactions, and the concentration of specific cellular ions. A fundamental challenge with using fluorescence microscopy is the presence of noise. This study introduces a novel approach to reducing noise in fluorescence microscopy images. The noise reduction problem is posed as a Maximum A Posteriori estimation problem, and solved using a novel random field model called stochastically-connected random field (SRF), which combines random graph and field theory. Experimental results using synthetic and real fluorescence microscopy data show the proposed approach achieving strong noise reduction performance when compared to several other noise reduction algorithms, using quantitative metrics. The proposed SRF approach was able to achieve strong performance in terms of signal-to-noise ratio in the synthetic results, high signal to noise ratio and contrast to noise ratio in the real fluorescence microscopy data results, and was able to maintain cell structure and subtle details while reducing background and intra-cellular noise. PMID:26884148

  5. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by the Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e., the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.

  7. Emergence of patterns in random processes. II. Stochastic structure in random events.

    PubMed

    Newman, William I

    2014-06-01

    Random events can present what appears to be a pattern in the length of peak-to-peak sequences in time series and other point processes. Previously, we showed that this was the case in both individual and independently distributed processes as well as for Brownian walks. In addition, we introduced the use of the discrete form of the Langevin equation of statistical mechanics as a device for connecting the two limiting sets of behaviors, which we then compared with a variety of observations from the physical and social sciences. Here, we establish a probabilistic framework via the Smoluchowski equation for exploring the Langevin equation and its expected peak-to-peak sequence lengths, and we introduce a concept we call "stochastic structure in random events," or SSRE. We extend the Brownian model to include antipersistent processes via autoregressive (AR) models. We relate the latter to describe the behavior of Old Faithful Geyser in Yellowstone National Park, and we devise a further test for the validity of the Langevin and AR models. Given our analytic results, we show how the Langevin equation can be adapted to describe population cycles of three to four years observed among many mammalian species in biology. PMID:25019731

  8. Emergence of patterns in random processes. II. Stochastic structure in random events

    NASA Astrophysics Data System (ADS)

    Newman, William I.

    2014-06-01

    Random events can present what appears to be a pattern in the length of peak-to-peak sequences in time series and other point processes. Previously, we showed that this was the case in both individual and independently distributed processes as well as for Brownian walks. In addition, we introduced the use of the discrete form of the Langevin equation of statistical mechanics as a device for connecting the two limiting sets of behaviors, which we then compared with a variety of observations from the physical and social sciences. Here, we establish a probabilistic framework via the Smoluchowski equation for exploring the Langevin equation and its expected peak-to-peak sequence lengths, and we introduce a concept we call "stochastic structure in random events," or SSRE. We extend the Brownian model to include antipersistent processes via autoregressive (AR) models. We relate the latter to describe the behavior of Old Faithful Geyser in Yellowstone National Park, and we devise a further test for the validity of the Langevin and AR models. Given our analytic results, we show how the Langevin equation can be adapted to describe population cycles of three to four years observed among many mammalian species in biology.
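
    A small numerical experiment in the spirit of the paper, assuming illustrative parameters: compare peak-to-peak gap lengths for an i.i.d. series against an antipersistent AR(1) series.

        import numpy as np

        rng = np.random.default_rng(6)

        def peak_gaps(x):
            # Step counts between successive local maxima of a series.
            peaks = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
            return np.diff(peaks)

        n = 200000
        iid = rng.standard_normal(n)

        phi, ar = -0.5, np.zeros(n)          # phi < 0: antipersistent AR(1)
        for i in range(1, n):
            ar[i] = phi * ar[i-1] + rng.standard_normal()

        print("mean gap, i.i.d. :", peak_gaps(iid).mean())   # -> 3 in theory
        print("mean gap, AR(1)  :", peak_gaps(ar).mean())    # shorter runs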

  9. One-dimensional random field Ising model and discrete stochastic mappings

    SciTech Connect

    Behn, U.; Zagrebnov, V.A.

    1987-06-01

    Previous results relating the one-dimensional random field Ising model to a discrete stochastic mapping are generalized to a two-valued correlated random (Markovian) field and to the case of zero temperature. The fractal dimension of the support of the invariant measure is calculated in a simple approximation and its dependence on the physical parameters is discussed.

  10. Mathematical analysis and algorithms for efficiently and accurately implementing stochastic simulations of short-term synaptic depression and facilitation.

    PubMed

    McDonnell, Mark D; Mohan, Ashutosh; Stricker, Christian

    2013-01-01

    The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential (AP) at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of AP arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics frequently are based only on the mean number of vesicles released by each pre-synaptic AP, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
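
    One of the standard conceptual models discussed above, the binomial release/recovery model, can be sketched as follows; the site count, release probability, and recovery time constant are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)

        def release_counts(ap_times, n_sites=10, p_rel=0.4, tau_rec=0.5):
            # Each AP releases Binomial(available, p_rel) vesicles; emptied
            # sites refill independently with time constant tau_rec, so the
            # simulation reproduces trial-to-trial variability, not just means.
            available, last_t, out = n_sites, 0.0, []
            for t in ap_times:
                p_refill = 1.0 - np.exp(-(t - last_t) / tau_rec)
                available += rng.binomial(n_sites - available, p_refill)
                k = rng.binomial(available, p_rel)
                out.append(k)
                available -= k
                last_t = t
            return out

        print(release_counts(np.arange(0.05, 1.0, 0.05)))   # 20 Hz train -> depression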

  11. Cover art: Issues in the metric-guided and metric-less placement of random and stochastic template banks

    SciTech Connect

    Manca, Gian Mario; Vallisneri, Michele

    2010-01-15

    The efficient placement of signal templates in source-parameter space is a crucial requisite for exhaustive matched-filtering searches of modeled gravitational-wave sources, as well as other searches based on more general detection statistics. Unfortunately, the current placement algorithms based on regular parameter-space meshes are difficult to generalize beyond simple signal models with few parameters. Various authors have suggested that a general, flexible, yet efficient alternative can be found in randomized placement strategies such as random placement and stochastic placement, which enhances random placement by selectively rejecting templates that are too close to others. In this article we explore several theoretical and practical issues in randomized placement: the size and performance of the resulting template banks; the very general, purely geometric effects of parameter-space boundaries; the use of quasirandom (self-avoiding) number sequences; most important, the implementation of these algorithms in curved signal manifolds with and without the use of a Riemannian signal metric, which may be difficult to obtain. Specifically, we show how the metric can be replaced with a discrete triangulation-based representation of local geometry. We argue that the broad class of randomized placement algorithms offers a promising answer to many search problems, but that the specific choice of a scheme and its implementation details will still need to be fine-tuned separately for each problem.

  12. Quantum stochastic walks: A generalization of classical random walks and quantum walks

    NASA Astrophysics Data System (ADS)

    Whitfield, James D.; Rodríguez-Rosario, César A.; Aspuru-Guzik, Alán

    2010-02-01

    We introduce the quantum stochastic walk (QSW), which determines the evolution of a generalized quantum-mechanical walk on a graph that obeys a quantum stochastic equation of motion. Using an axiomatic approach, we specify the rules for all possible quantum, classical, and quantum-stochastic transitions from a vertex as defined by its connectivity. We show how the family of possible QSWs encompasses both the classical random walk (CRW) and the quantum walk (QW) as special cases but also includes more general probability distributions. As an example, we study the QSW on a line and the glued tree of depth three to observe the behavior of the QW-to-CRW transition.

  13. Quantum stochastic walks: A generalization of classical random walks and quantum walks

    SciTech Connect

    Whitfield, James D.; Rodriguez-Rosario, Cesar A.; Aspuru-Guzik, Alan

    2010-02-15

    We introduce the quantum stochastic walk (QSW), which determines the evolution of a generalized quantum-mechanical walk on a graph that obeys a quantum stochastic equation of motion. Using an axiomatic approach, we specify the rules for all possible quantum, classical, and quantum-stochastic transitions from a vertex as defined by its connectivity. We show how the family of possible QSWs encompasses both the classical random walk (CRW) and the quantum walk (QW) as special cases but also includes more general probability distributions. As an example, we study the QSW on a line and the glued tree of depth three to observe the behavior of the QW-to-CRW transition.

  15. A new model for realistic random perturbations of stochastic oscillators

    NASA Astrophysics Data System (ADS)

    Dieci, Luca; Li, Wuchen; Zhou, Haomin

    2016-08-01

    Classical theories predict that solutions of differential equations will leave any neighborhood of a stable limit cycle, if white noise is added to the system. In reality, many engineering systems modeled by second order differential equations, like the van der Pol oscillator, show incredible robustness against noise perturbations, and the perturbed trajectories remain in the neighborhood of a stable limit cycle for all times of practical interest. In this paper, we propose a new model of noise to bridge this apparent discrepancy between theory and practice. Restricting to perturbations from within this new class of noise, we consider stochastic perturbations of second order differential systems that, in the unperturbed case, admit asymptotically stable limit cycles. We show that the perturbed solutions are globally bounded and remain in a tubular neighborhood of the underlying deterministic periodic orbit. We also define stochastic Poincaré map(s), and further derive partial differential equations for the transition density function.

  16. Decoherence in optimized quantum random-walk search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun

    2015-08-01

    This paper investigates the effects of decoherence generated by broken-link-type noise in the hypercube on an optimized quantum random-walk search algorithm. When the hypercube occurs with random broken links, the optimized quantum random-walk search algorithm with decoherence is depicted through defining the shift operator which includes the possibility of broken links. For a given database size, we obtain the maximum success rate of the algorithm and the required number of iterations through numerical simulations and analysis when the algorithm is in the presence of decoherence. Then the computational complexity of the algorithm with decoherence is obtained. The results show that the ultimate effect of broken-link-type decoherence on the optimized quantum random-walk search algorithm is negative. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).

  17. Investigation of stochastic radiation transport methods in random heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Reinert, Dustin Ray

    Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global warming induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error were implemented and evaluated. An extended memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed source calculations containing
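
    The chord length sampling idea referenced above can be sketched in one dimension, with all mean free paths hypothetical stand-ins rather than HTGR data:

        import numpy as np

        rng = np.random.default_rng(8)

        def ray_escapes(L=6.0, mfp_matrix=1.0, mfp_kernel=0.05, mfp_abs=0.2):
            # March along a ray, alternating exponentially distributed matrix
            # and kernel chords; inside a kernel the ray may be absorbed.
            s, in_kernel = 0.0, False
            while s < L:
                chord = rng.exponential(mfp_kernel if in_kernel else mfp_matrix)
                if in_kernel and rng.random() < 1.0 - np.exp(-chord / mfp_abs):
                    return False                 # absorbed inside a fuel kernel
                s += chord
                in_kernel = not in_kernel
            return True

        n = 100000
        print("escape fraction:", sum(ray_escapes() for _ in range(n)) / n)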

  18. Experimental implementation of the quantum random-walk algorithm

    SciTech Connect

    Du Jiangfeng; Li Hui; Shi Mingjun; Zhou Xianyi; Han Rongdian; Xu Xiaodong; Wu Jihui

    2003-04-01

    The quantum random walk is a possible approach to the construction of quantum algorithms. Several groups have investigated the quantum random walk, and experimental schemes have been proposed. In this paper, we present the experimental implementation of the quantum random-walk algorithm on a nuclear-magnetic-resonance quantum computer. We observe that the quantum walk is in sharp contrast to its classical counterpart. In particular, the properties of the quantum walk strongly depend on the quantum entanglement.

  19. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.

  20. A Simple Genetic Algorithm for Calibration of Stochastic Rock Discontinuity Networks

    NASA Astrophysics Data System (ADS)

    Jimenez, R.; Jurado-Piña, R.

    2012-07-01

    We present a novel approach for calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of application of the method to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to GA operations of selection, crossover and mutation. Such back-calculated parameters are employed to make assessments about the inference capabilities of the model using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs significantly depend on the type of objective function considered; and they also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases they can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
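
    A minimal binary-chromosome GA of the kind described, calibrating a single hypothetical network parameter against a made-up target value (the real objective functions compare simulated and observed discontinuity statistics):

        import numpy as np

        rng = np.random.default_rng(9)

        def decode(bits, lo=0.0, hi=10.0):
            # Map a binary chromosome to a real parameter value.
            return lo + (hi - lo) * int("".join(map(str, bits)), 2) / (2 ** len(bits) - 1)

        def fitness(bits, target=3.7):
            # Calibration objective: match a hypothetical observed statistic.
            return -abs(decode(bits) - target)

        pop = rng.integers(0, 2, size=(40, 16))          # 40 chromosomes, 16 bits
        for _ in range(60):
            f = np.array([fitness(ind) for ind in pop])
            p = f - f.min() + 1e-9
            p /= p.sum()
            parents = pop[rng.choice(len(pop), size=len(pop), p=p)]   # selection
            children = parents.copy()
            for i in range(len(pop) // 2):                            # crossover
                c = rng.integers(1, pop.shape[1])
                children[2*i, c:] = parents[2*i+1, c:]
                children[2*i+1, c:] = parents[2*i, c:]
            flip = rng.random(children.shape) < 0.01                  # mutation
            pop = np.where(flip, 1 - children, children)

        best = max(pop, key=fitness)
        print("calibrated parameter:", decode(best))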

  1. Evaluation of a Geothermal Prospect Using a Stochastic Joint Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Tompson, A. F.; Mellors, R. J.; Ramirez, A.; Dyer, K.; Yang, X.; Trainor-Guitton, W.; Wagoner, J. L.

    2013-12-01

    A stochastic joint inverse algorithm to analyze diverse geophysical and hydrologic data for a geothermal prospect is developed. The purpose is to improve prospect evaluation by finding an ensemble of hydrothermal flow models that are most consistent with multiple types of data sets. The staged approach combines Bayesian inference within a Markov Chain Monte Carlo (MCMC) global search algorithm. The method is highly flexible and capable of accommodating multiple and diverse datasets as a means to maximize the utility of all available data to understand system behavior. An initial application is made at a geothermal prospect located near Superstition Mountain in the western Salton Trough in California. Readily available data include three thermal gradient exploration boreholes, borehole resistivity logs, magnetotelluric and gravity geophysical surveys, surface heat flux measurements, and other nearby hydrologic and geologic information. Initial estimates of uncertainty in structural or parametric characteristics of the prospect are used to drive large numbers of simulations of hydrothermal fluid flow and related geophysical processes using random realizations of the conceptual geothermal system. Uncertainty in the results is represented within a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the perceived (prior) uncertainties. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-641792.
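
    The MCMC search at the core of such a staged approach can be sketched with a one-parameter Metropolis sampler; the forward model, prior bounds, and noise level below are placeholders for the actual simulators and survey data.

        import numpy as np

        rng = np.random.default_rng(10)

        def forward(theta):
            # Hypothetical forward model standing in for the coupled
            # hydrothermal/geophysical simulators.
            return np.array([np.exp(-theta), theta ** 2, 3.0 * theta])

        observed = forward(1.3) + 0.05 * rng.standard_normal(3)
        sigma = 0.05

        def log_post(theta):
            if not 0.0 < theta < 5.0:              # uniform prior bounds
                return -np.inf
            r = (observed - forward(theta)) / sigma
            return -0.5 * r @ r

        theta, lp, chain = 2.5, log_post(2.5), []
        for _ in range(20000):
            prop = theta + 0.1 * rng.standard_normal()
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept
                theta, lp = prop, lp_prop
            chain.append(theta)

        kept = np.array(chain[5000:])                  # discard burn-in
        print("posterior mean +/- std:", kept.mean(), kept.std())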

  2. Characterizing Energy Landscapes of Peptides Using a Combination of Stochastic Algorithms.

    PubMed

    Devaurs, Didier; Molloy, Kevin; Vaisset, Marc; Shehu, Amarda; Siméon, Thierry; Cortés, Juan

    2015-07-01

    Obtaining accurate representations of energy landscapes of biomolecules such as proteins and peptides is central to the study of their physicochemical properties and biological functions. Peptides are particularly interesting, as they exploit structural flexibility to modulate their biological function. Despite their small size, peptide modeling remains challenging due to the complexity of the energy landscape of such highly flexible dynamic systems. Currently, only stochastic sampling-based methods can efficiently explore the conformational space of a peptide. In this paper, we propose combining two such methods to obtain a full characterization of energy landscapes of small yet flexible peptides. First, we propose a simplified version of the classical Basin Hopping algorithm to reveal low-energy regions in the landscape, and thus to identify the corresponding meta-stable structural states of a peptide. Then, we present several variants of a robotics-inspired algorithm, the Transition-based Rapidly-exploring Random Tree, to quickly determine transition path ensembles, as well as transition probabilities between meta-stable states. We demonstrate this combined approach on met-enkephalin.
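
    A minimal Basin Hopping loop (random perturbation, local relaxation, Metropolis acceptance) on a toy one-dimensional landscape; the energy function and hop size are illustrative, and scipy's local minimizer stands in for a molecular-mechanics relaxation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(11)

        def energy(x):
            # Toy rugged 1D landscape standing in for a peptide energy surface.
            return 0.1 * x[0] ** 2 + np.sin(3.0 * x[0])

        res = minimize(energy, np.array([4.0]))
        x, e = res.x, res.fun
        best_x, best_e = x, e
        for _ in range(100):
            res = minimize(energy, x + rng.uniform(-1.5, 1.5, size=1))  # hop + relax
            if res.fun < e or rng.random() < np.exp(e - res.fun):       # Metropolis, kT = 1
                x, e = res.x, res.fun
                if e < best_e:
                    best_x, best_e = x, e
        print("lowest basin found: x =", best_x[0], " E =", best_e)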

  3. Topics in Randomized Algorithms for Numerical Linear Algebra

    NASA Astrophysics Data System (ADS)

    Holodnak, John T.

    In this dissertation, we present results for three topics in randomized algorithms. Each topic is related to random sampling. We begin by studying a randomized algorithm for matrix multiplication that randomly samples outer products. We show that if a set of deterministic conditions is satisfied, then the algorithm can compute the exact product. In addition, we show probabilistic bounds on the two-norm relative error of the algorithm. In the second part, we discuss the sensitivity of leverage scores to perturbations. Leverage scores are scalar quantities that give a notion of importance to the rows of a matrix. They are used as sampling probabilities in many randomized algorithms. We show bounds on the difference between the leverage scores of a matrix and a perturbation of the matrix. In the last part, we approximate functions over an active subspace of parameters. To identify the active subspace, we apply an algorithm that relies on a random sampling scheme. We show bounds on the accuracy of the active subspace identification algorithm and construct an approximation to a function with 3556 parameters using a ten-dimensional active subspace.
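
    The outer-product sampling algorithm studied in the first part can be sketched as follows, using the common norm-proportional sampling probabilities and unbiased rescaling (assumed here; the dissertation's exact conditions are not reproduced):

        import numpy as np

        rng = np.random.default_rng(12)

        def sampled_matmul(A, B, k):
            # Approximate A @ B by k rescaled outer products, sampled with
            # probability proportional to the column/row norm products.
            norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
            p = norms / norms.sum()
            idx = rng.choice(A.shape[1], size=k, p=p)
            C = np.zeros((A.shape[0], B.shape[1]))
            for i in idx:
                C += np.outer(A[:, i], B[i, :]) / (k * p[i])   # unbiased rescaling
            return C

        A = rng.standard_normal((50, 200))
        B = rng.standard_normal((200, 50))
        exact = A @ B
        err = np.linalg.norm(exact - sampled_matmul(A, B, 100)) / np.linalg.norm(exact)
        print("relative Frobenius error:", err)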

  4. Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models

    NASA Technical Reports Server (NTRS)

    Mjolsness, Eric; Castano, Rebecca; Gray, Alexander

    1999-01-01

    We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.

  5. Representation of nonlinear random transformations by non-gaussian stochastic neural networks.

    PubMed

    Turchetti, Claudio; Crippa, Paolo; Pirani, Massimiliano; Biagetti, Giorgio

    2008-06-01

    The learning capability of neural networks is equivalent to modeling physical events that occur in the real environment. Several early works have demonstrated that neural networks belonging to some classes are universal approximators of input-output deterministic functions. Recent works extend the ability of neural networks in approximating random functions using a class of networks named stochastic neural networks (SNN). In the language of system theory, the approximation of both deterministic and stochastic functions falls within the identification of nonlinear no-memory systems. However, all the results presented so far are restricted to the case of Gaussian stochastic processes (SPs) only, or to linear transformations that guarantee this property. This paper aims at investigating the ability of stochastic neural networks to approximate nonlinear input-output random transformations, thus widening the range of applicability of these networks to nonlinear systems with memory. In particular, this study shows that networks belonging to a class named non-Gaussian stochastic approximate identity neural networks (SAINNs) are capable of approximating the solutions of large classes of nonlinear random ordinary differential transformations. The effectiveness of this approach is demonstrated and discussed by some application examples.

  6. ANASA-a stochastic reinforcement algorithm for real-valued neural computation.

    PubMed

    Vasilakos, A V; Loukas, N H

    1996-01-01

    This paper introduces ANASA (adaptive neural algorithm of stochastic activation), a new, efficient, reinforcement learning algorithm for training neural units and networks with continuous output. The proposed method employs concepts, found in self-organizing neural networks theory and in reinforcement estimator learning algorithms, to extract and exploit information relative to previous input pattern presentations. In addition, it uses an adaptive learning rate function and a self-adjusting stochastic activation to accelerate the learning process. A form of optimal performance of the ANASA algorithm is proved (under a set of assumptions) via strong convergence theorems and concepts. Experimentally, the new algorithm yields results, which are superior compared to existing associative reinforcement learning methods in terms of accuracy and convergence rates. The rapid convergence rate of ANASA is demonstrated in a simple learning task, when it is used as a single neural unit, and in mathematical function modeling problems, when it is used to train various multilayered neural networks.

  7. Modeling Signal Transduction Networks: A comparison of two Stochastic Kinetic Simulation Algorithms

    SciTech Connect

    Pettigrew, Michel F.; Resat, Haluk

    2005-09-15

    Simulations of a scalable four-compartment reaction model based on the well known epidermal growth factor receptor (EGFR) signal transduction system are used to compare two stochastic algorithms: StochSim and the Gibson-Gillespie. It is concluded that the Gibson-Gillespie is the algorithm of choice for most realistic cases, with the possible exception of signal transduction networks characterized by a moderate number (< 100) of complex types, each with a very small population, but with a high degree of connectivity amongst the complex types. Keywords: Signal transduction networks, Stochastic simulation, StochSim, Gillespie
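
    For reference, a Gillespie-style direct-method sketch on a toy binding step R + L <-> C; the species and rate constants are illustrative, not the EGFR model used in the comparison.

        import numpy as np

        rng = np.random.default_rng(13)

        def gillespie(R=100, L=100, C=0, T=10.0, k_on=0.001, k_off=0.1):
            # Direct method: draw the waiting time from the total propensity,
            # then pick which reaction fires in proportion to its share.
            t = 0.0
            while t < T:
                a = np.array([k_on * R * L, k_off * C])
                a0 = a.sum()
                if a0 == 0.0:
                    break
                t += rng.exponential(1.0 / a0)
                if rng.random() * a0 < a[0]:
                    R, L, C = R - 1, L - 1, C + 1
                else:
                    R, L, C = R + 1, L + 1, C - 1
            return R, L, C

        print("final (R, L, C):", gillespie())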

  8. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.

  10. Robust stabilisation of 2D state-delayed stochastic systems with randomly occurring uncertainties and nonlinearities

    NASA Astrophysics Data System (ADS)

    Duan, Zhaoxia; Xiang, Zhengrong; Karimi, Hamid Reza

    2014-07-01

    This paper is concerned with the state feedback control problem for a class of two-dimensional (2D) discrete-time stochastic systems with time-delays, randomly occurring uncertainties and nonlinearities. Both the sector-like nonlinearities and the norm-bounded uncertainties enter into the system in random ways, and such randomly occurring uncertainties and nonlinearities obey certain mutually uncorrelated Bernoulli random binary distribution laws. Sufficient computationally tractable linear matrix inequality-based conditions are established for the 2D nonlinear stochastic time-delay systems to be asymptotically stable in the mean-square sense, and then the explicit expression of the desired controller gains is derived. An illustrative example is provided to show the usefulness and effectiveness of the proposed method.

  11. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2004-04-26

    Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NOx production for a steady diffusion flame.

  12. Probabilistic tracking control for non-Gaussian stochastic process using novel iterative learning algorithms

    NASA Astrophysics Data System (ADS)

    Yi, Yang; Sun, ChangYin; Guo, Lei

    2013-07-01

    A new generalised iterative learning algorithm is presented for complex dynamic non-Gaussian stochastic processes. Once designed neural networks are used to approximate the output probability density function (PDF) of the stochastic system in repetitive or batch processes, the complex probabilistic tracking control of the output PDF is simplified into a parameter tuning problem between two adjacent repetitive processes. Under this framework, this article studies a novel model-free iterative learning control problem and proposes a convex optimisation algorithm based on a set of designed linear matrix inequalities and an L1 optimisation index. It is noted that such an algorithm can improve the tracking performance and robustness of the closed-loop PDF control. A simulated example is given, which effectively demonstrates the use of the proposed control algorithm.

  13. Random Renormalization Group Operators Applied to Stochastic Dynamics

    NASA Astrophysics Data System (ADS)

    O'Malley, Daniel; Cushman, John H.

    2012-11-01

    Let X(t) be a fixed point of the renormalization group operator (RGO), R_{p,r} X(t) = X(rt)/r^p. Scaling laws for the probability density, mean first passage times, and finite-size Lyapunov exponents of such fixed points are reviewed in anticipation of more general results. A generalized RGO, R_{P,n}, where P is a random variable, is introduced. Scaling laws associated with these random RGOs (RRGOs) are demonstrated numerically and applied to subdiffusion in bacterial cytoplasm and a process modeling the transition from subdiffusion to classical diffusion. The scaling laws for the RRGO are not simple power laws, but are a weighted average of power laws. The weighting used in the scaling laws can be determined adaptively via Bayes' theorem.
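    For intuition, Brownian motion is the textbook fixed point of the RGO with p = 1/2, since B(rt)/r^(1/2) has the same law as B(t). The following quick check (our example, not from the paper) verifies the second moments numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Brownian motion satisfies R_{p,r} X(t) = X(rt)/r^p in distribution with
# p = 1/2: B(rt)/sqrt(r) has the same law as B(t). Compare the variances.
t, r, n = 1.0, 4.0, 200_000
B_t  = np.sqrt(t)     * rng.standard_normal(n)   # B(t)  ~ N(0, t)
B_rt = np.sqrt(r * t) * rng.standard_normal(n)   # B(rt) ~ N(0, rt)

print(round(np.var(B_rt / r**0.5), 3),
      "should be close to", round(np.var(B_t), 3))
```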

  14. A stochastic model of randomly accelerated walkers for human mobility

    NASA Astrophysics Data System (ADS)

    Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc

    2016-08-01

    Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility.
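    A minimal sketch of the kick mechanism described above (all rates and kick amplitudes are illustrative, not fitted to the GPS data): velocities receive Gaussian kicks at Poisson times, trip durations are exponential, and the resulting distance distribution is short-tailed.

```python
import numpy as np

rng = np.random.default_rng(2)

def trip_distance(kick_rate=1.0, kick_sigma=1.0, mean_duration=1.0, dt=0.01):
    """One trip: the velocity receives Gaussian acceleration kicks at
    Poisson times; the trip duration is exponentially distributed."""
    T = rng.exponential(mean_duration)
    v, x, t = 0.0, 0.0, 0.0
    while t < T:
        if rng.random() < kick_rate * dt:    # a kick arrives in [t, t+dt)
            v += kick_sigma * rng.standard_normal()
        x += abs(v) * dt                     # distance covered in this step
        t += dt
    return x

dists = np.array([trip_distance() for _ in range(10_000)])
# short-tailed distribution: high quantiles grow slowly with the level
print("median, 99%, 99.9% quantiles:",
      np.quantile(dists, [0.5, 0.99, 0.999]).round(2))
```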

  15. A stochastic model of randomly accelerated walkers for human mobility.

    PubMed

    Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc

    2016-01-01

    Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility. PMID:27573984

  16. A stochastic model of randomly accelerated walkers for human mobility

    PubMed Central

    Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc

    2016-01-01

    Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility. PMID:27573984

  17. State estimation of stochastic non-linear hybrid dynamic system using an interacting multiple model algorithm.

    PubMed

    Elenchezhiyan, M; Prakash, J

    2015-09-01

    In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both the discrete modes and the continuous state estimates of a hybrid dynamic system, either an IMM extended Kalman filter (IMM-EKF) or an IMM-based derivative-free Kalman filter is proposed in this study. The efficacy of the proposed IMM-based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on a two-tank hybrid system and a switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM-based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In both the presence and the absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms the multiple-model UKF (MM-UKF) based simultaneous state and parameter estimation scheme.

  18. Random attractors for stochastic 2D-Navier-Stokes equations in some unbounded domains

    NASA Astrophysics Data System (ADS)

    Brzeźniak, Z.; Caraballo, T.; Langa, J. A.; Li, Y.; Łukaszewicz, G.; Real, J.

    We show that the stochastic flow generated by the 2-dimensional Stochastic Navier-Stokes equations with rough noise on a Poincaré-like domain has a unique random attractor. One of the technical problems associated with the rough noise is overcome by the use of the corresponding Cameron-Martin (or reproducing kernel Hilbert) space. Our results complement the result by Brzeźniak and Li (2006) [10], who showed that the corresponding flow is asymptotically compact, and also generalize Caraballo et al. (2006) [12], who proved existence of a unique attractor for the time-dependent deterministic Navier-Stokes equations.

  19. Recursive state estimation for discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks

    NASA Astrophysics Data System (ADS)

    Ding, Derui; Shen, Yuxuan; Song, Yan; Wang, Yongxiong

    2016-07-01

    This paper is concerned with the state estimation problem for a class of discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks. The stochastic nonlinearity, described by statistical means, which covers several classes of well-studied nonlinearities as special cases, is taken into discussion. The randomly occurring deception attacks are modelled by a set of random variables obeying Bernoulli distributions with given probabilities. The purpose of the addressed state estimation problem is to design an estimator that minimizes an upper bound for the estimation error covariance at each sampling instant; such an upper bound is minimized by properly designing the estimator gain. The proposed estimation scheme, in the form of two Riccati-like difference equations, is recursive. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed scheme.

  20. On a Stochastic Failure Model under Random Shocks

    NASA Astrophysics Data System (ADS)

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moments of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but with a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function, where applicable.
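    A Monte Carlo sketch of such a delayed shock model (the intensity function and delay distribution are chosen purely for illustration): shocks arrive from a nonhomogeneous Poisson process simulated by thinning, and each shock triggers failure after an independent random delay.

```python
import numpy as np

rng = np.random.default_rng(3)

# Shocks arrive from an NHPP with intensity lam(t) = a + b*t (simulated by
# thinning); each shock triggers failure after an independent random delay.
a, b, mean_delay, horizon = 0.2, 0.1, 2.0, 10.0
lam = lambda t: a + b * t
lam_max = lam(horizon)                       # majorant on [0, horizon]

def failure_time():
    t, first_failure = 0.0, np.inf
    while True:
        t += rng.exponential(1.0 / lam_max)  # candidate arrival
        if t > horizon:
            return first_failure
        if rng.random() < lam(t) / lam_max:  # thinning: accept the shock
            first_failure = min(first_failure,
                                t + rng.exponential(mean_delay))

samples = np.array([failure_time() for _ in range(50_000)])
for t in (2.0, 5.0, 8.0):
    print(f"estimated survival P(T > {t}) = {np.mean(samples > t):.3f}")
```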

  1. THE LOSS OF ACCURACY OF STOCHASTIC COLLOCATION METHOD IN SOLVING NONLINEAR DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S

    2013-01-01

    In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equation with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indicators of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to the noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can be highly irregular with respect to the random data, and in such cases the stochastic collocation method cannot capture the correct solution.

  2. Hybrid discrete/continuum algorithms for stochastic reaction networks

    SciTech Connect

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; Najm, Habib N.

    2015-01-15

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker–Planck equation. The Fokker–Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. The performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.
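    The paper couples the discrete CME to a finite-volume Fokker-Planck solver with reaction-by-reaction probability transfer at the interface. As a much looser illustration of the same low-count/high-count split, the sketch below switches a birth-death process between an exact SSA step and a chemical-Langevin step (a sample path of the Fokker-Planck approximation) at a molecule-count threshold; the rates and the threshold are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Birth-death process: 0 -> X at rate k_b, X -> 0 at rate k_d * n.
k_b, k_d, threshold, dt_cl = 50.0, 0.5, 40, 0.01

def step(n):
    if n < threshold:                        # low counts: exact SSA step
        n = int(round(n))
        a = np.array([k_b, k_d * n])         # propensities
        a0 = a.sum()
        tau = rng.exponential(1.0 / a0)
        n += 1 if rng.random() < a[0] / a0 else -1
        return float(n), tau
    # high counts: chemical-Langevin step (Fokker-Planck sample path)
    drift = k_b - k_d * n
    noise = (np.sqrt(k_b) * rng.standard_normal()
             - np.sqrt(k_d * n) * rng.standard_normal())
    return max(n + drift * dt_cl + noise * np.sqrt(dt_cl), 0.0), dt_cl

n, t = 0.0, 0.0
while t < 200.0:
    n, tau = step(n)
    t += tau
print(f"n at t=200: {n:.1f} (stationary mean k_b/k_d = {k_b / k_d:.0f})")
```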

  3. Hybrid discrete/continuum algorithms for stochastic reaction networks

    DOE PAGESBeta

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; Najm, Habib N.

    2014-10-22

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. As a result, the performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

  4. Hybrid discrete/continuum algorithms for stochastic reaction networks

    NASA Astrophysics Data System (ADS)

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; Najm, Habib N.

    2015-01-01

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. The performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

  5. Hybrid discrete/continuum algorithms for stochastic reaction networks

    SciTech Connect

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; Najm, Habib N.

    2014-10-22

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. As a result, the performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

  6. Cycles, randomness, and transport from chaotic dynamics to stochastic processes.

    PubMed

    Gaspard, Pierre

    2015-09-01

    An overview of advances at the frontier between dynamical systems theory and nonequilibrium statistical mechanics is given. Sensitivity to initial conditions is a mechanism at the origin of dynamical randomness, alias temporal disorder, in deterministic dynamical systems. In spatially extended systems sustaining transport processes, such as diffusion, relationships can be established between the characteristic quantities of dynamical chaos and the transport coefficients, bringing new insight into the second law of thermodynamics. With methods from dynamical systems theory, the microscopic time-reversal symmetry can be shown to be broken at the statistical level of description in nonequilibrium systems. In this way, the thermodynamic entropy production turns out to be related to temporal disorder and its time asymmetry away from equilibrium. PMID:26428559

  7. A stochastic analysis of steady and transient heat conduction in random media using a homogenization approach

    SciTech Connect

    Zhijie Xu

    2014-07-01

    We present a new stochastic analysis for the steady and transient one-dimensional heat conduction problem based on the homogenization approach. Thermal conductivity is assumed to be a random field K consisting of a total number N of random variables. Both steady and transient solutions T are expressed in terms of the homogenized solution and its spatial derivatives, where the homogenized solution is obtained by solving the homogenized equation with an effective thermal conductivity. Both the mean and variance of the stochastic solutions can be obtained analytically for a K field consisting of independent identically distributed (i.i.d.) random variables. The mean and variance of T are shown to depend only on the mean and variance of these i.i.d. variables, not on the particular form of their probability distribution function. The variance of the temperature field T can be separated into two contributions: the ensemble contribution (through the homogenized temperature) and the configurational contribution (through the random variable Ln(x)). The configurational contribution is shown to be proportional to the local gradient of the homogenized solution. Large uncertainty of the T field was found at locations with a large gradient of the homogenized solution, due to the significant configurational contributions at these locations. Numerical simulations were implemented based on a direct Monte Carlo method, and good agreement is obtained between the numerical Monte Carlo results and the proposed stochastic analysis.
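    The one-dimensional steady case is easy to reproduce by direct Monte Carlo: per realization, the interface temperatures follow from flux continuity, and the effective conductivity is the harmonic mean of the layer conductivities. A small sketch with illustrative uniform i.i.d. conductivities:

```python
import numpy as np

rng = np.random.default_rng(5)

# Steady 1D conduction through N equal-thickness layers with i.i.d. random
# conductivities K_i and fixed end temperatures T0, TL. Flux continuity gives
# the interface temperatures
#   T_j = T0 + (TL - T0) * sum_{i<=j} K_i^{-1} / sum_i K_i^{-1}.
N, T0, TL, n_mc = 64, 1.0, 0.0, 20_000
K = rng.uniform(0.5, 1.5, size=(n_mc, N))     # i.i.d. layer conductivities

R = np.cumsum(1.0 / K, axis=1)                # cumulative thermal resistance
T = T0 + (TL - T0) * R / R[:, -1:]            # interface temperatures

mid = N // 2
print("mean T at midpoint:", T[:, mid].mean().round(4))
print("var  T at midpoint:", T[:, mid].var().round(6))
print("mean effective conductivity (harmonic mean):",
      (N / R[:, -1]).mean().round(4))
```

    Consistent with the paper's analysis, the midpoint statistics depend only on the mean and variance of the layer conductivities; swapping the uniform distribution for any other with the same first two moments leaves the results essentially unchanged.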

  8. The Study of Randomized Visual Saliency Detection Algorithm

    PubMed Central

    Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    The quality of image segmentation based on a visual saliency map depends strongly on the underlying visual saliency metric. Most existing metrics produce only a coarse saliency map, and segmentation based on such a rough map degrades the segmentation results. This paper presents a randomized visual saliency detection algorithm that can quickly generate a detailed saliency map of the same size as the original input image. The method can satisfy real-time requirements for content-based image scaling, and, applied to fast randomized video saliency area detection, the algorithm requires only a small amount of memory to detect a detailed, oriented visual saliency map. The presented results show that using this visual saliency map in the image segmentation process yields near-ideal segmentation results. PMID:24382980

  9. The study of randomized visual saliency detection algorithm.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    The quality of image segmentation based on a visual saliency map depends strongly on the underlying visual saliency metric. Most existing metrics produce only a coarse saliency map, and segmentation based on such a rough map degrades the segmentation results. This paper presents a randomized visual saliency detection algorithm that can quickly generate a detailed saliency map of the same size as the original input image. The method can satisfy real-time requirements for content-based image scaling, and, applied to fast randomized video saliency area detection, the algorithm requires only a small amount of memory to detect a detailed, oriented visual saliency map. The presented results show that using this visual saliency map in the image segmentation process yields near-ideal segmentation results.

  10. A stochastic model and Monte Carlo algorithm for fluctuation-induced H{sub 2} formation on the surface of interstellar dust grains

    SciTech Connect

    Sabelfeld, K.K.

    2015-09-01

    A stochastic algorithm for simulation of fluctuation-induced kinetics of H{sub 2} formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1] where this method was developed to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with random source term. In this paper we derive the general system of Smoluchowski type equations for the formation of H{sub 2} from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with a continuous generation in time and randomly distributed in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of interacting particle system in discrete but randomly progressed time instances. The segregation is analyzed through the correlation analysis of the vector random field of concentrations which appears to be isotropic in space and stationary in time.

  11. A stochastic model and Monte Carlo algorithm for fluctuation-induced H2 formation on the surface of interstellar dust grains

    NASA Astrophysics Data System (ADS)

    Sabelfeld, K. K.

    2015-09-01

    A stochastic algorithm for simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1] where this method was developed to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with random source term. In this paper we derive the general system of Smoluchowski type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with a continuous generation in time and randomly distributed in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of interacting particle system in discrete but randomly progressed time instances. The segregation is analyzed through the correlation analysis of the vector random field of concentrations which appears to be isotropic in space and stationary in time.

  12. On the Wiener-Masani algorithm for finding the generating function of multivariate stochastic processes

    NASA Technical Reports Server (NTRS)

    Miamee, A. G.

    1988-01-01

    The algorithms developed by Wiener and Masani (1957 and 1958) and Masani (1960) for the characterization of a class of multivariate stationary stochastic processes are investigated analytically. The algorithms permit the determination of (1) the generating function, (2) the prediction-error matrix, and (3) an autoregressive representation of the linear least-squares predictor. A number of theorems and lemmas are proved, and it is shown that the range of validity of the algorithms can be extended significantly beyond that given by Wiener and Masani.

  13. Stochastic diffusion and Kolmogorov entropy in regular and random Hamiltonians

    SciTech Connect

    Isichenko, M.B. (Inst. for Fusion Studies; Kurchatov Inst. of Atomic Energy, Moscow); Horton, W. (Inst. for Fusion Studies); Kim, D.E.; Heo, E.G.; Choi, D.I.

    1992-05-01

    The scalings of the E × B turbulent diffusion coefficient D and the Kolmogorov entropy K with the potential amplitude φ̃ of the fluctuation are studied using the geometrical analysis of closed and extended particle orbits for several types of drift Hamiltonians. The high-amplitude scalings, D ∝ φ̃^2 or φ̃^0 and K ∝ log φ̃, are shown to arise from different forms of a periodic (four-wave) Hamiltonian φ̃(x,y,t), thereby explaining the controversy in earlier numerical results. For a quasi-random (six-wave) Hamiltonian, numerical data for the diffusion D ∝ φ̃^(0.92±0.04) and the Kolmogorov entropy K ∝ φ̃^(0.56±0.17) are presented and compared with the percolation theory predictions D_p ∝ φ̃^0.7, K_p ∝ φ̃^0.5. To study the turbulent diffusion in a general form of Hamiltonian, a new approach based on the series expansion of the Lagrangian velocity correlation function is proposed and discussed.

  14. Stochastic diffusion and Kolmogorov entropy in regular and random Hamiltonians

    SciTech Connect

    Isichenko, M.B.; Horton, W.; Kim, D.E.; Heo, E.G.; Choi, D.I.

    1992-05-01

    The scalings of the E × B turbulent diffusion coefficient D and the Kolmogorov entropy K with the potential amplitude φ̃ of the fluctuation are studied using the geometrical analysis of closed and extended particle orbits for several types of drift Hamiltonians. The high-amplitude scalings, D ∝ φ̃^2 or φ̃^0 and K ∝ log φ̃, are shown to arise from different forms of a periodic (four-wave) Hamiltonian φ̃(x,y,t), thereby explaining the controversy in earlier numerical results. For a quasi-random (six-wave) Hamiltonian, numerical data for the diffusion D ∝ φ̃^(0.92±0.04) and the Kolmogorov entropy K ∝ φ̃^(0.56±0.17) are presented and compared with the percolation theory predictions D_p ∝ φ̃^0.7, K_p ∝ φ̃^0.5. To study the turbulent diffusion in a general form of Hamiltonian, a new approach based on the series expansion of the Lagrangian velocity correlation function is proposed and discussed.

  15. Random transitions described by the stochastic Smoluchowski-Poisson system and by the stochastic Keller-Segel model.

    PubMed

    Chavanis, P H; Delfini, L

    2014-03-01

    We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010)]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the barrier of free energy ΔF divided by the energy of thermal excitation k_B T. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as e^N and is considerable for N ≫ 1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the barrier of free energy decreases. In that case, the fluctuations become important and the mean field approximation is no longer valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean field approximation.

  16. Random transitions described by the stochastic Smoluchowski-Poisson system and by the stochastic Keller-Segel model.

    PubMed

    Chavanis, P H; Delfini, L

    2014-03-01

    We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010)]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the barrier of free energy ΔF divided by the energy of thermal excitation k_B T. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as e^N and is considerable for N ≫ 1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the barrier of free energy decreases. In that case, the fluctuations become important and the mean field approximation is no longer valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean field approximation.

  17. Nuclear space-valued stochastic differential equations driven by Poisson random measures

    SciTech Connect

    Xiong, J.

    1992-01-01

    The thesis is devoted primarily to the study of stochastic differential equations on duals of nuclear spaces driven by Poisson random measures. The existence of a weak solution is obtained by the Galerkin method and the uniqueness is established by implementing the Yamada-Watanabe argument in the present setup. When the magnitudes of the driving terms are small enough and the Poisson streams occur frequently enough, it is proved that the stochastic differential equations mentioned above can be approximated by diffusion equations. Finally, the author considers a system of interacting stochastic differential equations driven by Poisson random measures. Let (X^n_1(t), ..., X^n_n(t)) be the solution of this system and consider the empirical measures ζ_n(ω,B) ≡ (1/n) Σ_{j=1}^{n} δ_{X^n_j(·,ω)}(B), n ≥ 1. It is proved that ζ_n converges in distribution to a non-random measure which is the unique solution of a McKean-Vlasov equation. The above problems are motivated by applications to neurophysiology, in particular, to the fluctuation of voltage potentials of spatially distributed neurons and to the study of the asymptotic behavior of large systems of interacting neurons.

  18. Steady state and mean recurrence time for random walks on stochastic temporal networks.

    PubMed

    Speidel, Leo; Lambiotte, Renaud; Aihara, Kazuyuki; Masuda, Naoki

    2015-01-01

    Random walks are basic diffusion processes on networks and have applications in, for example, searching, navigation, ranking, and community detection. Recent recognition of the importance of temporal aspects on networks spurred studies of random walks on temporal networks. Here we theoretically study two types of event-driven random walks on a stochastic temporal network model that produces arbitrary distributions of interevent times. In the so-called active random walk, the interevent time is reinitialized on all links upon each movement of the walker. In the so-called passive random walk, the interevent time is reinitialized only on the link that has been used the last time, and it is a type of correlated random walk. We find that the steady state is always the uniform density for the passive random walk. In contrast, for the active random walk, it increases or decreases with the node's degree depending on the distribution of interevent times. The mean recurrence time of a node is inversely proportional to the degree for both active and passive random walks. Furthermore, the mean recurrence time does or does not depend on the distribution of interevent times for the active and passive random walks, respectively. PMID:25679656

  19. A comparison of computational efficiencies of stochastic algorithms in terms of two infection models.

    PubMed

    Banks, H Thomas; Hu, Shuhua; Joyner, Michele; Broido, Anna; Canter, Brandi; Gayvert, Kaitlyn; Link, Kathryn

    2012-07-01

    In this paper, we investigate three particular algorithms: a stochastic simulation algorithm (SSA), and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a Vancomycin-resistant enterococcus (VRE) infection model at the population level, and a Human Immunodeficiency Virus (HIV) within host infection model. While the first has a low species count and few transitions, the second is more complex with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and degree of precision required. The numerical results suggest that all three algorithms have the similar computational efficiency for the simpler VRE model, and the SSA is the best choice due to its simplicity and accuracy. In addition, we have found that with the larger and more complex HIV model, implementation and modification of tau-Leaping methods are preferred.
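    For reference, here is a minimal Gillespie SSA of the kind compared in the paper, applied to a toy SIS-type infection model; the reactions and rates are illustrative and are not the paper's VRE or HIV models.

```python
import numpy as np

rng = np.random.default_rng(6)

def ssa_sis(S=95, I=5, beta=0.3, gamma=0.1, t_end=100.0):
    """Gillespie SSA for a toy SIS model with illustrative rates.
    Reactions: S + I -> 2I (rate beta*S*I/N) and I -> S (rate gamma*I)."""
    N, t = S + I, 0.0
    while t < t_end and I > 0:
        a = np.array([beta * S * I / N, gamma * I])   # propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)                # time to next event
        if rng.random() < a[0] / a0:
            S, I = S - 1, I + 1                       # infection
        else:
            S, I = S + 1, I - 1                       # recovery
    return t, I

t, I = ssa_sis()
print(f"infected at t={t:.1f}: {I}")
```

    Tau-leaping methods replace the one-event-at-a-time loop with Poisson-distributed batches of events over a fixed step, trading exactness for speed; the paper's comparison quantifies that trade-off on both models.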

  20. Zero-one-only process: A correlated random walk with a stochastic ratchet

    NASA Astrophysics Data System (ADS)

    Baek, Seung Ki; Jeong, Hawoong; Son, Seung-Woo; Kim, Beom Jun

    2014-08-01

    The investigation of random walks is central to a variety of stochastic processes in physics, chemistry and biology. To describe a transport phenomenon, we study a variant of the one-dimensional persistent random walk, which we call a zero-one-only process. It makes a step in the same direction as the previous step with probability p, and stops to change direction with probability 1 - p. By using the generating-function method, we calculate its characteristic quantities, such as the statistical moments and the probability of first return.
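    A quick simulation of the standard one-dimensional persistent random walk, a close relative of the zero-one-only process (our sketch, estimating first-return statistics empirically; the paper's analysis proceeds via generating functions):

```python
import numpy as np

rng = np.random.default_rng(7)

def first_return_time(p=0.7, max_steps=10_000):
    """First return to the origin of a 1D persistent random walk: each step
    repeats the previous direction with probability p, reverses with 1-p."""
    x, d = 1, 1                       # after the first step (to the right)
    for n in range(2, max_steps + 1):
        if rng.random() > p:
            d = -d                    # direction change with probability 1-p
        x += d
        if x == 0:
            return n
    return None                       # no return within max_steps

returns = [first_return_time() for _ in range(5_000)]
returned = [n for n in returns if n is not None]
print("empirical return probability (within 10^4 steps):",
      round(len(returned) / len(returns), 3))
print("mean first-return time of returned walks:",
      round(np.mean(returned), 1))
```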

  1. Markov random-field-based anomaly screening algorithm

    NASA Astrophysics Data System (ADS)

    Bello, Martin G.

    1995-06-01

    A novel anomaly screening algorithm is described which makes use of a regression diagnostic associated with the fitting of Markov Random Field (MRF) models. This regression diagnostic quantifies the extent to which a given neighborhood of pixels is atypical relative to local background characteristics. The screening algorithm consists first in the calculation of MRF-based anomaly statistic values. Next, 'blob' features, such as pixel count and maximal pixel intensity, are calculated and ranked over the image in order to 'filter' the blobs to some final subset of most likely candidates. Receiver operating characteristics obtained from applying the above-described screening algorithm to the detection of minelike targets in high- and low-frequency side-scan sonar imagery are presented, together with results obtained from other screening algorithms for comparison, demonstrating performance comparable to trained human operators. In addition, real-time implementation considerations associated with each algorithmic component of the described procedure are identified.

  2. Measuring Edge Importance: A Quantitative Analysis of the Stochastic Shielding Approximation for Random Processes on Graphs

    PubMed Central

    2014-01-01

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin–Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán’s approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process. PMID:24742077

  3. The stochastic evolution of a protocell: the Gillespie algorithm in a dynamically varying volume.

    PubMed

    Carletti, T; Filisetti, A

    2012-01-01

    We propose an improvement of the Gillespie algorithm allowing us to study the time evolution of an ensemble of chemical reactions occurring in a varying volume, whose growth is directly related to the amount of some specific molecules belonging to the reaction set. This allows us to study the stochastic evolution of a protocell, whose volume increases because of the production of container molecules. Several protocell models are considered and compared with the deterministic models. PMID:22536297
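    A hedged sketch of the idea: a Gillespie simulation in which the volume is tied to the container-molecule count, so bimolecular propensities carry a 1/V factor that changes as the protocell grows. The species, reactions and rates below are invented for illustration and are not one of the paper's protocell models.

```python
import numpy as np

rng = np.random.default_rng(8)

# Invented toy protocell: precursor P flows in at a rate proportional to the
# volume and is converted autocatalytically into container molecules C.
# The volume is tied to the container count, V = v_per_C * C, so the
# bimolecular propensity scales as 1/V as the cell grows.
k_in, k_cat, v_per_C = 10.0, 1.0, 0.1

def ssa_protocell(P=0, C=10, t_end=8.0):
    t = 0.0
    while t < t_end:
        V = v_per_C * C
        a = np.array([k_in * V,                # inflow: 0 -> P
                      k_cat * P * C / V])      # autocatalysis: P + C -> 2C
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        if rng.random() < a[0] / a0:
            P += 1
        else:
            P, C = P - 1, C + 1                # P consumed, container grows
    return P, C

P, C = ssa_protocell()
print(f"after growth: P={P}, C={C}, volume={v_per_C * C:.1f}")
```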

  4. On stochastic FEM based computational homogenization of magneto-active heterogeneous materials with random microstructure

    NASA Astrophysics Data System (ADS)

    Pivovarov, Dmytro; Steinmann, Paul

    2016-09-01

    In the current work we apply the stochastic version of the FEM to the homogenization of magneto-elastic heterogeneous materials with random microstructure. The main aim of this study is to capture accurately the discontinuities appearing at matrix-inclusion interfaces. We demonstrate and compare three different techniques proposed in the literature for the purely mechanical problem, i.e. global, local and enriched stochastic basis functions. Moreover, we demonstrate the implementation of the isoparametric concept in the enlarged physical-stochastic product space. The Gauss integration rule in this multidimensional space is discussed. In order to design a realistic stochastic Representative Volume Element we analyze actual scans obtained by electron microscopy and provide numerical studies of the micro particle distribution. The SFEM framework described in our previous work (Pivovarov and Steinmann in Comput Mech 57(1): 123-147, 2016) is extended to the case of the magneto-elastic materials. To this end, the magneto-elastic energy function is used, and the corresponding hyper-tensors of the magneto-elastic problem are introduced. In order to estimate the methods' accuracy we performed a set of simulations for elastic and magneto-elastic problems using three different SFEM modifications. All results are compared with "brute-force" Monte-Carlo simulations used as reference solution.

  5. Simulation of multicorrelated random processes using the FFT algorithm

    NASA Technical Reports Server (NTRS)

    Wittig, L. E.; Sinha, A. K.

    1975-01-01

    A technique for the digital simulation of multicorrelated Gaussian random processes is described. This technique is based upon generating discrete frequency functions which correspond to the Fourier transform of the desired random processes, and then using the fast Fourier transform (FFT) algorithm to obtain the actual random processes. The main advantage of this method of simulation over other methods is computation time; it appears to be more than an order of magnitude faster than present methods of simulation. One of the main uses of multicorrelated simulated random processes is in solving nonlinear random vibration problems by numerical integration of the governing differential equations. The response of a nonlinear string to a distributed noise input is presented as an example.
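    A compact sketch of the single-process version of this technique (the random-phase spectral method; the multicorrelated case of the paper additionally factors a cross-spectral density matrix at each frequency, which is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(9)

def gaussian_process_fft(psd, n, dt):
    """One realization of a stationary Gaussian process with one-sided power
    spectral density psd(f), via the random-phase spectral method and the FFT."""
    f = np.fft.rfftfreq(n, dt)              # non-negative frequency grid
    df = f[1] - f[0]
    amp = 0.5 * n * np.sqrt(psd(f) * df)    # per-bin spectral amplitude
    spec = amp * (rng.standard_normal(f.size)
                  + 1j * rng.standard_normal(f.size))
    spec[0] = 0.0                           # zero-mean process
    spec[-1] = spec[-1].real                # Nyquist bin of a real signal
    return np.fft.irfft(spec, n)

# Example: band-limited white noise with an illustrative 5 Hz cutoff.
psd = lambda f: np.where(f < 5.0, 1.0, 0.0)
x = gaussian_process_fft(psd, n=2**14, dt=0.01)
print("sample variance:", round(x.var(), 2), "(target 5.0, the PSD integral)")
```

    The speed advantage cited in the abstract comes from the O(n log n) cost of the FFT: the entire record is synthesized in one transform instead of summing sinusoids frequency by frequency.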

  6. Random transitions described by the stochastic Smoluchowski-Poisson system and by the stochastic Keller-Segel model

    NASA Astrophysics Data System (ADS)

    Chavanis, P. H.; Delfini, L.

    2014-03-01

    We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010), 10.1103/PhysRevE.81.051103]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the barrier of free energy ΔF divided by the energy of thermal excitation kBT. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as eN and is considerable for N ≫1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the barrier of free energy decreases. In that case, the fluctuations become important and the mean field approximation is no more valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean

  7. Monotonic continuous-time random walks with drift and stochastic reset events

    NASA Astrophysics Data System (ADS)

    Montero, Miquel; Villarroel, Javier

    2013-01-01

    In this paper we consider a stochastic process that may experience random reset events which suddenly bring the system to the starting value and analyze the relevant statistical magnitudes. We focus our attention on monotonic continuous-time random walks with a constant drift: The process increases between the reset events, either by the effect of the random jumps, or by the action of the deterministic drift. As a result of all these combined factors interesting properties emerge, like the existence (for any drift strength) of a stationary transition probability density function, or the faculty of the model to reproduce power-law-like behavior. General formulas for two extreme statistics, the survival probability, and the mean exit time are also derived. To corroborate in an independent way the results of the paper, Monte Carlo methods were used. These numerical estimations are in full agreement with the analytical predictions.
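    The stationary density of such a process is easy to sample directly, because the memorylessness of Poisson resets makes the age since the last reset exponentially distributed. A small Monte Carlo sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(10)

# Illustrative parameters: drift c, positive exponential jumps at rate lam_j,
# resets to zero at Poisson rate lam_r.
c, lam_j, jump_mean, lam_r = 0.5, 1.0, 1.0, 0.2

def stationary_sample():
    """Sample the stationary state: the age since the last reset is
    exponential(lam_r), and the monotonic state depends only on that age."""
    age = rng.exponential(1.0 / lam_r)
    n_jumps = rng.poisson(lam_j * age)
    return c * age + rng.exponential(jump_mean, n_jumps).sum()

xs = np.array([stationary_sample() for _ in range(100_000)])
# stationary mean: (c + lam_j * jump_mean) / lam_r
print("empirical mean:", xs.mean().round(2),
      " theory:", (c + lam_j * jump_mean) / lam_r)
```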

  8. A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion

    SciTech Connect

    Jenny, Patrick; Torrilhon, Manuel; Heinz, Stefan

    2010-02-20

    In this paper, a stochastic model is presented to simulate the flow of gases which are not in thermodynamic equilibrium, as in rarefied or micro-scale situations. For the interaction of a particle with others, statistical moments of the local ensemble have to be evaluated, but unlike in molecular dynamics simulations or DSMC, no collisions between computational particles are considered. In addition, a novel integration technique allows for time steps independent of the stochastic time scale. The stochastic model represents a Fokker-Planck equation in the kinetic description, which can be viewed as an approximation to the Boltzmann equation. This allows for a rigorous investigation of the relation between the new model and classical fluid and kinetic equations. The fluid dynamic equations of Navier-Stokes and Fourier are fully recovered for small relaxation times, while for larger values the new model extends into the kinetic regime. Numerical studies demonstrate that the stochastic model is consistent with Navier-Stokes in that limit, but also that the results become significantly different if the conditions for equilibrium are invalid. The application to the Knudsen paradox demonstrates the correctness and relevance of this development, and comparisons with existing kinetic equations and standard solution algorithms reveal its advantages. Moreover, results of a test case with geometrically complex boundaries are presented.

  9. The Separatrix Algorithm for Synthesis and Analysis of Stochastic Simulations with Applications in Disease Modeling

    PubMed Central

    Klein, Daniel J.; Baym, Michael; Eckhoff, Philip

    2014-01-01

    Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by a given date), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which “success” is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria. PMID:25078087

  10. Stochastic resonance in a fractional harmonic oscillator subject to random mass and signal-modulated noise

    NASA Astrophysics Data System (ADS)

    Guo, Feng; Zhu, Cheng-Yin; Cheng, Xiao-Feng; Li, Heng

    2016-10-01

    Stochastic resonance in a fractional harmonic oscillator with random mass and signal-modulated noise is investigated. Applying linear system theory and the characteristics of the noises, the analytical expression of the mean output-amplitude-gain (OAG) is obtained. It is shown that the OAG varies non-monotonically with the increase of the intensity of the multiplicative dichotomous noise, with the increase of the frequency of the driving force, as well as with the increase of the system frequency. In addition, the OAG is a non-monotonic function of the system friction coefficient, of the viscous damping coefficient, and of the fractional exponent.

  11. Dynamics of the stochastic Leslie-Gower predator-prey system with randomized intrinsic growth rate

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli; Yuan, Sanling

    2016-11-01

    This paper investigates the stochastic Leslie-Gower predator-prey system with randomized intrinsic growth rate. The existence of a unique global positive solution is proved first. Then we obtain sufficient conditions for permanence in mean and almost sure extinction of the system. Furthermore, the stationary distribution is derived based on the positive equilibrium of the deterministic model, which shows that the population is not only persistent but also convergent by time average under some assumptions. Finally, we illustrate our conclusions through two examples.
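    A minimal Euler-Maruyama sketch of a Leslie-Gower-type system with white-noise-perturbed growth rates; the functional form and every parameter below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative Leslie-Gower-type SDE (not the paper's exact model):
#   dx = x (r1 - b x - a1 y / (x + k1)) dt + s1 x dW1   (prey)
#   dy = y (r2 - a2 y / (x + k2)) dt      + s2 y dW2    (predator)
r1, b, a1, k1, s1 = 1.0, 0.1, 0.5, 1.0, 0.1
r2, a2, k2, s2 = 0.5, 0.4, 1.0, 0.1

def simulate(x=5.0, y=2.0, dt=1e-2, t_end=100.0):
    for _ in range(int(t_end / dt)):
        dW1, dW2 = np.sqrt(dt) * rng.standard_normal(2)
        x += x * (r1 - b * x - a1 * y / (x + k1)) * dt + s1 * x * dW1
        y += y * (r2 - a2 * y / (x + k2)) * dt + s2 * y * dW2
        x, y = max(x, 1e-12), max(y, 1e-12)   # keep the state positive
    return x, y

x, y = simulate()
print(f"state at t=100: prey={x:.2f}, predator={y:.2f}")
```

    Running many such paths and averaging over time gives an empirical picture of the stationary distribution whose existence the paper establishes analytically.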

  12. Theory of weak scattering of stochastic electromagnetic fields from deterministic and random media

    SciTech Connect

    Tong Zhisong; Korotkova, Olga

    2010-09-15

    The theory of scattering of scalar stochastic fields from deterministic and random media is generalized to the electromagnetic domain under the first-order Born approximation. The analysis allows for determining the changes in spectrum, coherence, and polarization of electromagnetic fields produced on their propagation from the source to the scattering volume, interaction with the scatterer, and propagation from the scatterer to the far field. An example of scattering of a field produced by a {delta}-correlated partially polarized source and scattered from a {delta}-correlated medium is provided.

  13. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2013-04-01

    The stochastic volatility (SV) model is one of the volatility models which infer the latent volatility of asset returns. The Bayesian inference of the SV model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. Then we calculate the accuracy of the volatility measurement using the realized volatility as a proxy for the true volatility, and compare the SV model with the GARCH model, another common volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.

  14. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    NASA Astrophysics Data System (ADS)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.

  15. Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay.

    PubMed

    Bao, Haibo; Cao, Jinde

    2011-01-01

    This paper is concerned with the state estimation problem for a class of discrete-time stochastic neural networks (DSNNs) with random delays. The effects of both the variation range and the distribution probability of the time delay are taken into account in the proposed approach. The stochastic disturbances are described in terms of a Brownian motion, and the time-varying delay is characterized by introducing a Bernoulli stochastic variable. By employing a Lyapunov-Krasovskii functional, sufficient delay-distribution-dependent conditions are established in terms of linear matrix inequalities (LMIs) that guarantee the existence of the state estimator and can be checked readily by the Matlab toolbox. The main feature of the results obtained in this paper is that they depend not only on the bound but also on the distribution probability of the time delay, and we obtain a larger allowable variation range of the delay; hence our results are less conservative than the traditional delay-independent ones. One example is given to illustrate the effectiveness of the proposed result. PMID:20950998

  16. Stochastic simulation for the propagation of high-frequency acoustic waves through a random velocity field

    NASA Astrophysics Data System (ADS)

    Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.

    2012-05-01

    In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environment conditions and the examination complexity. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of the target locating by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. Using this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of travel times, known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities which are similar to those provided by a deterministic method, such as the ray method.
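    As a simplified stand-in for the paper's stochastic travel-time model (which draws travel times from a process with analytically known moments), one can Monte Carlo the time of flight through a layered medium with a randomly fluctuating sound speed; all values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)

# Layered medium with a randomly fluctuating sound speed (illustrative values:
# a sodium-like mean speed with 1% thermal fluctuations over a 0.5 m path).
c0, rel_std, path_len, n_layers, n_mc = 2500.0, 0.01, 0.5, 100, 50_000
dx = path_len / n_layers

c = c0 * (1.0 + rel_std * rng.standard_normal((n_mc, n_layers)))
tof = (dx / c).sum(axis=1)            # time of flight per realization

print("nominal ToF [us]:", round(1e6 * path_len / c0, 3))
print("mean ToF    [us]:", round(1e6 * tof.mean(), 3))
print("std  ToF    [ns]:", round(1e9 * tof.std(), 1))
```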

  17. Stochastic simulation for the propagation of high-frequency acoustic waves through a random velocity field

    SciTech Connect

    Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.

    2012-05-17

    In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environment conditions and the examination complexity. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of the target locating by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. Using this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of travel times, known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities which are similar to those provided by a deterministic method, such as the ray method.

  18. Simulation of ammonium and chromium transport in porous media using coupling scheme of a numerical algorithm and a stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Schüttrumpf, Holger; Köngeter, Jürgen; Becker, Torsten; Palani, Sundarambal

    2009-01-01

    The migration of chromium and ammonium species in groundwater and their effective remediation depend on the various hydro-geological characteristics of the system. Computational modeling of reactive transport problems is one of the preferred tools of field engineers in groundwater studies for decision making in pollution abatement. Analytical models have low computational demand but are less modular in nature and are difficult to modify when formulating different reactive systems. Numerical models provide more detailed information, at a higher computational cost. Coupling linear partial differential equations (PDEs) for the transport step with a non-linear system of ordinary differential equations (ODEs) for the reactive step is the usual mode of solving a kinetically controlled reactive transport equation. This continuum assumption is not appropriate for a system with low concentrations of species such as chromium; such reaction systems can instead be simulated using a stochastic algorithm. In this paper, a finite difference scheme coupled with a stochastic algorithm for simulating the transport of ammonium and chromium in subsurface media is detailed.
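
    A rough illustration of such a coupling scheme, not the authors' exact algorithm: a deterministic finite-difference diffusion step alternates with a stochastic reaction step in which each particle of the low-copy-number species reacts with probability 1 - exp(-k*dt) (exact for a first-order reaction). The grid size, rate constants, and periodic boundary are all assumptions of the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 1D grid of particle counts for a low-concentration species (e.g., chromium)
    n_cells, dx, dt = 50, 1.0, 0.1
    D, k = 0.5, 0.05           # diffusion coefficient and first-order rate (assumed)
    counts = np.zeros(n_cells, dtype=np.int64)
    counts[25] = 10_000        # initial pulse

    for _ in range(200):
        # Transport step: explicit finite differences (periodic boundary)
        c = counts.astype(float)
        lap = np.roll(c, 1) - 2 * c + np.roll(c, -1)
        c = c + D * dt / dx**2 * lap
        counts = np.maximum(c.round().astype(np.int64), 0)

        # Reaction step: stochastic, per cell -- each particle reacts with
        # probability 1 - exp(-k*dt), which is exact for first-order decay
        p = 1.0 - np.exp(-k * dt)
        counts -= rng.binomial(counts, p)

    print("remaining particles:", counts.sum())
    ```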

  19. Application of stochastic weighted algorithms to a multidimensional silica particle model

    SciTech Connect

    Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus

    2013-09-01

    Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs’ solutions converge to the direct simulation algorithm’s (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.
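
    For orientation, a direct-simulation sketch of stochastic coagulation with a constant kernel, a strong simplification of the transition-regime kernel and weighted-particle machinery studied in the paper; the population size and kernel value are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(14)

    # Direct simulation of coagulation with a constant kernel: pick two
    # particles at random, merge them, and advance time by a Gillespie-style
    # exponential waiting time.
    K = 1.0                       # constant coagulation kernel (assumed)
    masses = list(np.ones(1000))  # monodisperse initial population
    t = 0.0
    while len(masses) > 100:
        n = len(masses)
        rate = K * n * (n - 1) / 2.0       # total coagulation rate
        t += rng.exponential(1.0 / rate)
        i, j = rng.choice(n, size=2, replace=False)
        masses[i] += masses[j]             # merge particle j into i
        masses.pop(j)

    print(f"t = {t:.3e}, particles left = {len(masses)}, "
          f"mean mass = {np.mean(masses):.2f}")
    ```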

  20. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To speed up the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.

  1. A method to dynamic stochastic multicriteria decision making with log-normally distributed random variables.

    PubMed

    Wang, Xin-Fan; Wang, Jian-Qiang; Deng, Sheng-Yue

    2013-01-01

    We investigate dynamic stochastic multicriteria decision making (SMCDM) problems in which the criterion values take the form of log-normally distributed random variables and the argument information is collected from different periods. We propose two new geometric aggregation operators, namely the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG and LNDWG operators to aggregate the log-normally distributed criterion values, Shannon's entropy model to generate the time weight vector, and the expectation values and variances of the log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of the developed method.
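
    A minimal sketch of the geometric aggregation step, using the standard fact that a weighted geometric mean of independent log-normal variables is again log-normal; the criterion parameters and weights are invented, and the function name lndwg is only a label for this illustration.

    ```python
    import numpy as np

    def lndwg(mus, sigmas, weights):
        """Weighted geometric aggregation of log-normal criterion values.
        If X_i ~ LogNormal(mu_i, sigma_i^2), then prod_i X_i^{w_i} is
        LogNormal(sum_i w_i mu_i, sum_i w_i^2 sigma_i^2)."""
        mus, sigmas, w = map(np.asarray, (mus, sigmas, weights))
        mu = np.dot(w, mus)
        var = np.dot(w**2, sigmas**2)
        return mu, np.sqrt(var)

    # Hypothetical criterion values of one alternative over three criteria
    mu, sigma = lndwg(mus=[0.2, 0.5, 0.1], sigmas=[0.1, 0.2, 0.05],
                      weights=[0.5, 0.3, 0.2])

    # Rank alternatives by the expectation (and, on ties, the variance)
    expectation = np.exp(mu + sigma**2 / 2)
    variance = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
    print(expectation, variance)
    ```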

  2. A Bloch decomposition-based stochastic Galerkin method for quantum dynamics with a random external potential

    NASA Astrophysics Data System (ADS)

    Wu, Zhizhang; Huang, Zhongyi

    2016-07-01

    In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.

  3. Fractal and stochastic geometry inference for breast cancer: a case study with random fractal models and Quermass-interaction process.

    PubMed

    Hermann, Philipp; Mrkvička, Tomáš; Mattfeldt, Torsten; Minárová, Mária; Helisová, Kateřina; Nicolis, Orietta; Wartner, Fabian; Stehlík, Milan

    2015-08-15

    Fractals are models of natural processes with many applications in medicine. Recent studies in medicine show that fractals can be applied to cancer detection and the description of the pathological architecture of tumors. This fact is not surprising: owing to their irregular structure, cancerous cells can be interpreted as fractals. Inspired by the Sierpinski carpet, we introduce a flexible parametric model of random carpets, where randomization is introduced through binomial random variables. We provide an algorithm for estimating the parameters of the model and illustrate theoretical and practical issues in the generation of Sierpinski gaskets and in Hausdorff measure calculations. Stochastic geometry models can also serve as models for binary cancer images. Recently, a Boolean model was applied to 200 images of mammary cancer tissue and 200 images of mastopathic tissue. Here, we describe the Quermass-interaction process, which can handle much more variation in the cancer data, and we apply it to the images. It was found that mastopathic tissue deviates significantly more strongly from the Quermass-interaction process, which describes interactions among particles, than mammary cancer tissue does. The Quermass-interaction process serves as a model for tissue whose structure is broken up to a certain level, whereas the random fractal model fits mastopathic tissue well. We provide a novel method for discriminating between mastopathic and mammary cancer tissue on the basis of a complex wavelet-based self-similarity measure, with classification rates of more than 80%. This similarity measure is related to the Hurst exponent and fractional Brownian motions. The R package FractalParameterEstimation is developed and introduced in the paper.
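
    A toy version of a binomially randomized Sierpinski carpet under one concrete thinning rule (always delete the center of each 3x3 block, keep every other cell with probability p); the recursion depth and p are arbitrary, and the rule is an assumption rather than the paper's exact model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def random_carpet(level, p=0.9):
        """Randomized Sierpinski carpet: subdivide each filled square into a
        3x3 grid, always remove the center, and keep each of the 8 remaining
        sub-squares independently with probability p (Bernoulli thinning)."""
        grid = np.ones((1, 1), dtype=bool)
        for _ in range(level):
            grid = np.kron(grid, np.ones((3, 3), dtype=bool))
            mask = rng.random(grid.shape) < p
            mask[1::3, :][:, 1::3] = False   # center cell of every 3x3 block
            grid &= mask
        return grid

    carpet = random_carpet(level=4)
    print(carpet.shape, carpet.mean())       # fill fraction after 4 iterations
    ```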

  4. A stochastic neuronal model predicts random search behaviors at multiple spatial scales in C. elegans

    PubMed Central

    Roberts, William M; Augustine, Steven B; Lawton, Kristy J; Lindsay, Theodore H; Thiele, Tod R; Izquierdo, Eduardo J; Faumont, Serge; Lindsay, Rebecca A; Britton, Matthew Cale; Pokala, Navin; Bargmann, Cornelia I; Lockery, Shawn R

    2016-01-01

    Random search is a behavioral strategy used by organisms from bacteria to humans to locate food that is randomly distributed and undetectable at a distance. We investigated this behavior in the nematode Caenorhabditis elegans, an organism with a small, well-described nervous system. Here we formulate a mathematical model of random search abstracted from the C. elegans connectome and fit to a large-scale kinematic analysis of C. elegans behavior at submicron resolution. The model predicts behavioral effects of neuronal ablations and genetic perturbations, as well as unexpected aspects of wild type behavior. The predictive success of the model indicates that random search in C. elegans can be understood in terms of a neuronal flip-flop circuit involving reciprocal inhibition between two populations of stochastic neurons. Our findings establish a unified theoretical framework for understanding C. elegans locomotion and a testable neuronal model of random search that can be applied to other organisms. DOI: http://dx.doi.org/10.7554/eLife.12572.001 PMID:26824391

  5. A multi-sensor RSS spatial sensing-based robust stochastic optimization algorithm for enhanced wireless tethering.

    PubMed

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-12-12

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiation (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.

  8. Multimode fiber laser beam cleanup based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Hai-Chuan; Ma, Hao-Tong; Zhou, Pu; Wang, Xiao-Lin; Ma, Yan-Xing; Li, Xiao; Xu, Xiao-Jun; Zhao, Yi-Jun

    2011-01-01

    We present experimental research on multimode fiber laser beam cleanup based on a stochastic parallel gradient descent (SPGD) algorithm. The multimode laser is obtained by injecting a single-mode fiber laser with a central wavelength of 1064 nm into a multimode fiber, and the system is set up using phase-only liquid crystal spatial light modulators (LC-SLMs). The quality evaluation function is increased by a factor of 10.5, and 65% of the laser energy is encircled in the central lobe when the system evolves from the open-loop to the closed-loop state. The experimental results indicate the feasibility of multimode fiber laser beam cleanup by adaptive optics (AO).
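
    A generic SPGD loop in sketch form; in the experiment the metric J would be measured optically from the LC-SLM-corrected beam, whereas here it is a toy function, and the gain and perturbation amplitude are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def spgd(measure_metric, n_act, gain=0.5, amp=0.05, iters=500):
        """Stochastic parallel gradient descent (here ascent) on a control
        vector u (e.g., LC-SLM phase values). measure_metric(u) returns the
        quality metric J to be maximized; all parameters are illustrative."""
        u = np.zeros(n_act)
        for _ in range(iters):
            du = amp * rng.choice([-1.0, 1.0], size=n_act)   # random perturbation
            dJ = measure_metric(u + du) - measure_metric(u - du)
            u += gain * dJ * du                              # parallel update
        return u

    # Toy metric: peaked at a hidden optimal phase vector u_star
    u_star = rng.normal(size=32)
    J = lambda u: np.exp(-np.sum((u - u_star) ** 2))
    u = spgd(J, n_act=32)
    print("final metric:", J(u))
    ```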

  9. Runtime analysis of an evolutionary algorithm for stochastic multi-objective combinatorial optimization.

    PubMed

    Gutjahr, Walter J

    2012-01-01

    For stochastic multi-objective combinatorial optimization (SMOCO) problems, the adaptive Pareto sampling (APS) framework has been proposed, which is based on sampling and on the solution of deterministic multi-objective subproblems. We show that when plugging in the well-known simple evolutionary multi-objective optimizer (SEMO) as a subprocedure into APS, ε-dominance has to be used to achieve fast convergence to the Pareto front. Two general theorems are presented indicating how runtime complexity results for APS can be derived from corresponding results for SEMO. This may be a starting point for the runtime analysis of evolutionary SMOCO algorithms.
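
    The ε-dominance relation central to this analysis is easy to state in code; the additive, minimization-oriented variant below is one common convention and not necessarily the paper's exact definition.

    ```python
    import numpy as np

    def eps_dominates(a, b, eps=0.05):
        """Additive epsilon-dominance for minimization: a epsilon-dominates b
        if a is at least as good as b, up to eps, in every objective."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return bool(np.all(a - eps <= b))

    print(eps_dominates([1.00, 2.00], [1.04, 2.04]))  # True within eps = 0.05
    print(eps_dominates([1.00, 2.00], [0.90, 2.50]))  # False (worse in obj. 1)
    ```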

  10. Using genetic algorithm to solve a new multi-period stochastic optimization model

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

    This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], called a hybrid model; however, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth-loss risk while maximizing the expected utility; (2) typical market imperfections, such as short-sale constraints and proportional transaction costs, are considered simultaneously; (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
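
    A sketch of the scenario-based CVaR estimator that a GA evaluating candidate asset mixes could call; the loss sample below is stand-in simulated data, not output of the hybrid model.

    ```python
    import numpy as np

    def cvar(losses, alpha=0.95):
        """Empirical CVaR (expected shortfall): mean loss in the worst
        (1 - alpha) tail of simulated scenario losses."""
        losses = np.sort(np.asarray(losses))
        tail = losses[int(np.ceil(alpha * len(losses))):]
        return tail.mean()

    rng = np.random.default_rng(7)
    # Hypothetical scenario losses of one candidate asset mix (e.g., from
    # simulated return paths); in the model, a GA searches over the weights
    # while this CVaR estimate enters the risk constraint.
    scenario_losses = rng.normal(loc=-0.05, scale=0.2, size=100_000)
    print(f"95% CVaR: {cvar(scenario_losses, 0.95):.4f}")
    ```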

  11. Stochastic models: theory and simulation.

    SciTech Connect

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
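
    As a concrete instance of generating samples of a stochastic process, the sketch below draws correlated Gaussian paths by Cholesky-factorizing a covariance matrix; the squared-exponential covariance, grid, and jitter term are illustrative choices rather than the report's specific models.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Samples of a zero-mean, stationary Gaussian process on a grid, with a
    # squared-exponential covariance (an illustrative choice).
    t = np.linspace(0.0, 10.0, 200)
    ell = 1.0                                    # correlation length (assumed)
    C = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * ell**2))
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(t)))  # jitter for stability

    samples = L @ rng.standard_normal((len(t), 5))      # five independent paths
    print(samples.shape)
    ```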

  12. Estimating the atmospheric correlation length with stochastic parallel gradient descent algorithm.

    PubMed

    Yazdani, R; Hajimahmoodzadeh, M; Fallah, H R

    2014-03-01

    The atmospheric turbulence measurement has received much attention in various fields due to its effects on wave propagation. One of the interesting parameters for characterization of the atmospheric turbulence is the Fried parameter or the atmospheric correlation length. We numerically investigate the feasibility of estimating the Fried parameter using a simple and low-cost system based on the stochastic parallel gradient descent (SPGD) algorithm without the need for wavefront sensing. We simulate the atmospheric turbulence using Zernike polynomials and employ a wavefront sensor-less adaptive optics system based on the SPGD algorithm and report the estimated Fried parameter after compensating for atmospheric-turbulence-induced phase distortions. Several simulations for different atmospheric turbulence strengths are presented to validate the proposed method.

  13. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications.

    PubMed

    Decelle, Aurelien; Krzakala, Florent; Moore, Cristopher; Zdeborová, Lenka

    2011-12-01

    In this paper we extend our previous work on the stochastic block model, a commonly used generative model for social and biological networks, and the problem of inferring functional groups or communities from the topology of the network. We use the cavity method of statistical physics to obtain an asymptotically exact analysis of the phase diagram. We describe in detail properties of the detectability-undetectability phase transition and the easy-hard phase transition for the community detection problem. Our analysis translates naturally into a belief propagation algorithm for inferring the group memberships of the nodes in an optimal way, i.e., that maximizes the overlap with the underlying group memberships, and learning the underlying parameters of the block model. Finally, we apply the algorithm to two examples of real-world networks and discuss its performance. PMID:22304154
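
    For concreteness, a minimal generator for the stochastic block model itself (the cavity/belief-propagation inference side is beyond a short sketch); group sizes and edge probabilities are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_sbm(sizes, p_in, p_out):
        """Draw an undirected graph from a stochastic block model: edges occur
        with probability p_in within groups and p_out between groups."""
        labels = np.repeat(np.arange(len(sizes)), sizes)
        n = labels.size
        probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
        upper = np.triu(rng.random((n, n)) < probs, k=1)
        return upper | upper.T, labels

    adj, labels = sample_sbm(sizes=[100, 100], p_in=0.10, p_out=0.02)
    print("edges:", adj.sum() // 2)
    ```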

  14. 2D stochastic-integral models for characterizing random grain noise in titanium alloys

    SciTech Connect

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Cherry, Matthew; Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P.

    2014-02-18

    We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appears during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles, θ and φ, that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, which serve as the kernel of the K-L integral equation, and find that, of the two, the double-exponential appears to match measurements more closely. Results based on data from a Ti-7Al sample are given, and further applications of HDMR and ANOVA are discussed.
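
    A discretized one-dimensional K-L sketch with the double-exponential covariance: the K-L modes are eigenvectors of the covariance matrix, and a field sample is a random combination of them. The grid, correlation length, and truncation order are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Discrete Karhunen-Loeve expansion on a 1D grid with a double-exponential
    # covariance; parameters are illustrative, not the Ti-7Al study's values.
    x = np.linspace(0.0, 1.0, 200)
    ell = 0.1
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

    # K-L modes = eigenpairs of the covariance; a sample field is the sum of
    # modes weighted by sqrt(eigenvalue) times independent standard normals.
    vals, vecs = np.linalg.eigh(C)
    vals, vecs = vals[::-1], vecs[:, ::-1]          # descending order
    m = 20                                          # truncation order
    theta = vecs[:, :m] @ (np.sqrt(vals[:m]) * rng.standard_normal(m))
    print("sample field shape:", theta.shape)
    ```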

  15. Random Matrix Approach to Quantum Adiabatic Evolution Algorithms

    NASA Technical Reports Server (NTRS)

    Boulatov, Alexei; Smelyanskiy, Vadim N.

    2004-01-01

    We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian Unitary RMT ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, the driven RMT model always leads to exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAEA successful.

  16. Combinatorial approximation algorithms for MAXCUT using random walks.

    SciTech Connect

    Seshadhri, Comandur; Kale, Satyen

    2010-11-01

    We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation factor-running time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
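
    For reference, the trivial baseline the paper improves on: a uniformly random partition cuts each edge with probability 1/2 and is therefore a 0.5-approximation in expectation. The random graph below is only for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(15)

    # Random example graph as an edge list (self-loops dropped)
    n, m = 100, 500
    edges = rng.integers(0, n, size=(m, 2))
    edges = edges[edges[:, 0] != edges[:, 1]]

    # Trivial randomized MaxCut baseline: assign each vertex a random side;
    # each edge is cut with probability 1/2 in expectation.
    side = rng.random(n) < 0.5
    cut = np.sum(side[edges[:, 0]] != side[edges[:, 1]])
    print(f"cut {cut} of {len(edges)} edges ({cut / len(edges):.2%})")
    ```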

  17. Use of a Stochastic Joint Inversion Modeling Algorithm to Develop a Hydrothermal Flow Model at a Geothermal Prospect

    NASA Astrophysics Data System (ADS)

    Tompson, A. F. B.; Mellors, R. J.; Dyer, K.; Yang, X.; Chen, M.; Trainor Guitton, W.; Wagoner, J. L.; Ramirez, A. L.

    2014-12-01

    A stochastic joint inverse algorithm is used to analyze diverse geophysical and hydrologic data associated with a geothermal prospect. The approach uses a Markov Chain Monte Carlo (MCMC) global search algorithm to develop an ensemble of hydrothermal groundwater flow models that are most consistent with the observations. The algorithm utilizes an initial conceptual model descriptive of structural (geology), parametric (permeability) and hydrothermal (saturation, temperature) characteristics of the geologic system. Initial (a-priori) estimates of uncertainty in these characteristics are used to drive simulations of hydrothermal fluid flow and related geophysical processes in a large number of random realizations of the conceptual geothermal system spanning these uncertainties. The process seeks to improve the conceptual model by developing a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the a-priori uncertainties. The algorithm has been tested on a geothermal prospect located at Superstition Mountain, California and has been successful in creating a suite of models compatible with available temperature, surface resistivity, and magnetotelluric (MT) data. Although the MCMC method is highly flexible and capable of accommodating multiple and diverse datasets, a typical inversion may require the evaluation of thousands of possible model runs whose sophistication and complexity may evolve with the magnitude of data considered. As a result, we are testing the use of sensitivity analyses to better identify critical uncertain variables, lower order surrogate models to streamline computational costs, and value of information analyses to better assess optimal use of related data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL
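
    A schematic of the MCMC global-search idea, not the specific LLNL implementation: candidate models are randomly perturbed and retained by a Metropolis-type rule on the data misfit, accumulating an ensemble of models consistent with the observations. The toy misfit, step size, and target are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def mcmc_search(misfit, m0, step=0.1, n_iter=5000):
        """Metropolis-style MCMC over model parameters m: propose a random
        perturbation, accept it if it lowers the data misfit, occasionally
        accept uphill moves, and accumulate the visited models."""
        m, f = np.asarray(m0, float), misfit(m0)
        ensemble = []
        for _ in range(n_iter):
            m_new = m + step * rng.standard_normal(m.size)
            f_new = misfit(m_new)
            if f_new < f or rng.random() < np.exp(f - f_new):
                m, f = m_new, f_new
            ensemble.append(m.copy())
        return np.array(ensemble)

    # Toy misfit: squared distance to hypothetical observed data
    target = np.array([1.0, -2.0, 0.5])
    ens = mcmc_search(lambda m: np.sum((m - target) ** 2), m0=np.zeros(3))
    print("posterior mean estimate:", ens[2500:].mean(axis=0))
    ```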

  18. A genetic-algorithm-aided stochastic optimization model for regional air quality management under uncertainty.

    PubMed

    Qin, Xiaosheng; Huang, Guohe; Liu, Lei

    2010-01-01

    A genetic-algorithm-aided stochastic optimization (GASO) model was developed in this study for supporting regional air quality management under uncertainty. The model incorporated genetic algorithm (GA) and Monte Carlo simulation techniques into a general stochastic chance-constrained programming (CCP) framework and allowed uncertainties in simulation and optimization model parameters to be considered explicitly in the design of least-cost strategies. GA was used to seek the optimal solution of the management model by progressively evaluating the performances of individual solutions. Monte Carlo simulation was used to check the feasibility of each solution. A management problem in terms of regional air pollution control was studied to demonstrate the applicability of the proposed method. Results of the case study indicated the proposed model could effectively communicate uncertainties into the optimization process and generate solutions that contained a spectrum of potential air pollutant treatment options with risk and cost information. Decision alternatives could be obtained by analyzing tradeoffs between the overall pollutant treatment cost and the system-failure risk due to inherent uncertainties.
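
    A minimal sketch of the Monte Carlo feasibility check inside such a chance-constrained loop, with an invented constraint function g and uncertainty distribution: a candidate decision passes only if the simulated violation probability stays below the allowed risk.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def chance_feasible(decision, g, n_samples=10_000, reliability=0.95):
        """Monte Carlo check of a chance constraint
        P[g(x, xi) <= 0] >= reliability, as used to screen candidate
        solutions inside a GA loop; g and the distribution of the random
        parameter xi are illustrative."""
        xi = rng.normal(1.0, 0.2, size=n_samples)   # uncertain coefficient
        violations = g(decision, xi) > 0.0
        return violations.mean() <= 1.0 - reliability

    # Example: emission x*xi must stay below a cap of 1.2
    g = lambda x, xi: x * xi - 1.2
    print(chance_feasible(0.8, g), chance_feasible(1.3, g))
    ```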

  19. Low scaling algorithms for the random phase and GW approximation

    NASA Astrophysics Data System (ADS)

    Kaltak, Merzuk; Klimes, Jiri; Kresse, Georg

    2015-03-01

    The computationally most expensive step in conventional RPA implementations is the calculation of the independent-particle polarizability χ0. We present an algorithm that calculates χ0 using the Green's function in real space and imaginary time. In combination with optimized non-uniform frequency and time grids, the correlation energy at the random phase approximation level can be calculated efficiently with a computational cost that grows only cubically with system size. We apply this approach to calculate RPA defect energies of silicon using unit cells with up to 250 atoms and 128 CPU cores. Furthermore, we show how to extend the algorithm to the GW framework of Hedin and solve the Dyson equation for the Green's function with the same computational effort. This work was supported by the Austrian Spezialforschungsbereich Vienna Computational Materials Laboratory (SFB ViCoM) and the Deutsche Forschungsgruppe (FOR) 1346.

  20. Stochastic description of geometric phase for polarized waves in random media

    NASA Astrophysics Data System (ADS)

    Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent

    2013-01-01

    We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.

  1. Single realization stochastic FDTD for weak scattering waves in biological random media

    PubMed Central

    Tan, Tengmeng; Taflove, Allen; Backman, Vadim

    2015-01-01

    This paper introduces an iterative scheme to overcome the unresolved issues in the S-FDTD (stochastic finite-difference time-domain) method for obtaining ensemble-average field values recently reported by Smith and Furse, which attempts to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random-medium problem, which, when simulated with the proposed S-FDTD, indeed produces a very accurate result. PMID:27158153

  2. Stochastic Seismic Response of an Algiers Site with Random Depth to Bedrock

    SciTech Connect

    Badaoui, M.; Mebarki, A.; Berrah, M. K.

    2010-05-21

    Among the important effects of the Boumerdes earthquake (Algeria, May 21st, 2003) was that, within the same zone, the destruction in certain parts was more severe than in others. This phenomenon is due to site effects, which alter the characteristics of seismic motions and cause concentrations of damage during earthquakes. Local site conditions such as the thickness and mechanical properties of soil layers have important effects on the surface ground motions. This paper deals with the randomness of the depth to bedrock (soil layer heights), which is assumed to be a random variable with a lognormal distribution. This distribution is suitable for strictly non-negative random variables with large values of the coefficient of variation. In this case, Monte Carlo simulations are combined with the stiffness matrix method, used herein as a deterministic method, to evaluate the effect of the depth-to-bedrock uncertainty on the seismic response of a multilayered soil. The study considers a P and SV wave propagation pattern using input accelerations recorded at the Keddara station, located 20 km from the epicenter directly on the bedrock. A parametric study is conducted to derive the stochastic behavior of the peak ground acceleration and its response spectrum, the transfer function, and the amplification factors. It is found that the soil height heterogeneity causes a widening of the frequency content and an increase in the fundamental frequency of the soil profile, indicating that the resonance phenomenon concerns a larger number of structures.
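
    A small sketch of the sampling step, assuming the lognormal depth model is specified through a mean and coefficient of variation (both invented here); each draw would feed one deterministic stiffness-matrix run.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Lognormal depth-to-bedrock, parameterized by its physical mean and
    # coefficient of variation (illustrative values only).
    mean_h, cov_h = 30.0, 0.4                 # mean depth (m), coeff. of variation
    sigma2 = np.log(1.0 + cov_h**2)           # lognormal parameters from (mean, CoV)
    mu = np.log(mean_h) - 0.5 * sigma2

    depths = rng.lognormal(mu, np.sqrt(sigma2), size=10_000)
    print(depths.mean(), depths.std() / depths.mean())   # ~30 and ~0.4

    # Each sampled depth would drive one deterministic stiffness-matrix run;
    # the ensemble of outputs gives the stochastic response statistics.
    ```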

  3. Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    1988-01-01

    A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E[R^m] is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
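
    A generic AR(2) sketch of the building block used by the technique: simulate a correlated sequence, then extract its local extrema, which are the inputs to rainflow stress-range moments E[R^m]. The coefficients are arbitrary stable values, not ones fitted to a stress psd.

    ```python
    import numpy as np

    rng = np.random.default_rng(16)

    # AR(2) simulation of a correlated sequence (stable, illustrative values)
    phi1, phi2, sigma = 0.5, -0.3, 1.0
    n = 10_000
    x = np.zeros(n)
    for k in range(2, n):
        x[k] = phi1 * x[k - 1] + phi2 * x[k - 2] + sigma * rng.standard_normal()

    # Local extrema = sign changes of the first difference; these are the
    # quantities needed for rainflow stress-range moments.
    d = np.diff(x)
    extrema = x[1:-1][d[:-1] * d[1:] < 0]
    print("number of extrema:", extrema.size)
    ```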

  4. Global stability and stochastic permanence of a non-autonomous logistic equation with random perturbation

    NASA Astrophysics Data System (ADS)

    Jiang, Daqing; Shi, Ningzhong; Li, Xiaoyue

    2008-04-01

    This paper discusses the randomized non-autonomous logistic equation dN(t) = N(t)[(a(t) − b(t)N(t))dt + α(t)dB(t)], where B(t) is a 1-dimensional standard Brownian motion. In [D.Q. Jiang, N.Z. Shi, A note on non-autonomous logistic equation with random perturbation, J. Math. Anal. Appl. 303 (2005) 164-172], the authors show that E[1/N(t)] has a unique positive T-periodic solution E[1/Np(t)] provided a(t), b(t) and α(t) are continuous T-periodic functions, a(t) > 0, b(t) > 0 and ∫₀ᵀ[a(s) − α²(s)]ds > 0. We show that this equation is stochastically permanent and the solution Np(t) is globally attractive provided a(t), b(t) and α(t) are continuous T-periodic functions, a(t) > 0, b(t) > 0 and min_{t∈[0,T]} a(t) > max_{t∈[0,T]} α²(t). Along the way, similar results are obtained for a generalized non-autonomous logistic equation with random perturbation.
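
    A minimal Euler-Maruyama sketch of the equation above, with invented T-periodic coefficients chosen so that min a(t) > max α²(t) holds, as in the attractivity condition.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Euler-Maruyama integration of
    #   dN = N[(a(t) - b(t) N) dt + alpha(t) dB(t)]
    # with illustrative T-periodic coefficients.
    T, dt, n_steps = 1.0, 1e-3, 20_000
    a = lambda t: 2.0 + 0.5 * np.sin(2 * np.pi * t / T)
    b = lambda t: 1.0 + 0.2 * np.cos(2 * np.pi * t / T)
    alpha = lambda t: 0.3 + 0.1 * np.sin(2 * np.pi * t / T)  # min a > max alpha^2

    N, t = 0.5, 0.0
    for _ in range(n_steps):
        dB = np.sqrt(dt) * rng.standard_normal()
        N += N * ((a(t) - b(t) * N) * dt + alpha(t) * dB)
        t += dt
    print("N(20) =", N)
    ```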

  5. Networked Fusion Filtering from Outputs with Stochastic Uncertainties and Correlated Random Transmission Delays.

    PubMed

    Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa

    2016-01-01

    This paper is concerned with the distributed and centralized fusion filtering problems in sensor networked systems with random one-step delays in transmissions. The delays are described by Bernoulli variables correlated at consecutive sampling times, with different characteristics at each sensor. The measured outputs are subject to uncertainties modeled by random parameter matrices, thus providing a unified framework to describe a wide variety of network-induced phenomena; moreover, the additive noises are assumed to be one-step autocorrelated and cross-correlated. Under these conditions, without requiring knowledge of the signal evolution model, but using only the first and second order moments of the processes involved in the observation model, recursive algorithms for the optimal linear distributed and centralized filters under the least-squares criterion are derived by an innovation approach. Firstly, local estimators based on the measurements received from each sensor are obtained and, after that, the distributed fusion filter is generated as the least-squares matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. In order to compare the estimators' performance, recursive formulas for the error covariance matrices are derived in all the algorithms. The effects of the delays on the filters' accuracy are analyzed in a numerical example which also illustrates how some usual network-induced uncertainties can be dealt with using the current observation model described by random matrices.

  8. Tools and Algorithms to Link Horizontal Hydrologic and Vertical Hydrodynamic Models and Provide a Stochastic Modeling Framework

    NASA Astrophysics Data System (ADS)

    Salah, Ahmad M.; Nelson, E. James; Williams, Gustavious P.

    2010-04-01

    We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models and provide a stochastic simulation framework. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS), which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.

  9. A Stochastic Framework For Sediment Concentration Estimation By Accounting Random Arrival Processes Of Incoming Particles Into Receiving Waters

    NASA Astrophysics Data System (ADS)

    Tsai, C.; Hung, R. J.

    2015-12-01

    This study applies queueing theory to develop a stochastic framework that accounts for the random-sized batch arrivals of incoming sediment particles into receiving waters. Sediment particles, the control volume, and the mechanics of sediment transport (suspension, deposition and resuspension) are treated as the customers, the service facility, and the server, respectively, in queueing theory. In the framework, the stochastic diffusion particle tracking model (SD-PTM) and resuspension of particles are included to simulate the random transport trajectories of suspended particles. The most distinctive characteristic of queueing theory is that customers arrive at the service facility in a random manner. In analogy to sediment transport, this characteristic is adopted to model the random-sized batch arrival process of sediment particles, including the random occurrences and random magnitudes of incoming sediment particles. The random occurrences of arrivals are simulated by a Poisson process, while the number of sediment particles in each arrival is simulated by a binomial distribution. Simulations with random arrivals alone and with random magnitudes alone are proposed for comparison with the random-sized batch arrival simulations. The simulation results give a probabilistic description of discrete sediment transport through ensemble statistics (i.e., ensemble means and ensemble variances) of sediment concentrations and transport rates. Results reveal that the different mechanisms of incoming particles result in different ensemble variances of concentrations and transport rates under the same mean incoming rate of sediment particles.
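
    A sketch of the arrival process in isolation, as described: Poisson-distributed arrival instants, each carrying a binomially distributed batch of particles; all rates and parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Random-sized batch arrivals: arrival instants follow a Poisson process
    # (rate lam per unit time) and each arrival carries a binomially
    # distributed number of particles.
    lam, horizon = 5.0, 100.0          # mean 5 arrivals per unit time
    n_trials, p_particle = 20, 0.3     # batch size ~ Binomial(20, 0.3)

    n_arrivals = rng.poisson(lam * horizon)
    arrival_times = np.sort(rng.uniform(0.0, horizon, size=n_arrivals))
    batch_sizes = rng.binomial(n_trials, p_particle, size=n_arrivals)

    print("arrivals:", n_arrivals, " total particles:", batch_sizes.sum())
    # Ensemble statistics (means, variances of concentration) follow from
    # repeating this realization many times.
    ```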

  10. A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Uttara

    2012-10-01

    This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to find the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against a genetic algorithm, simulated annealing, and the (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.

  11. A new algorithm for calculating the curvature perturbations in stochastic inflation

    SciTech Connect

    Fujita, Tomohiro; Kawasaki, Masahiro; Tada, Yuichiro; Takesako, Tomohiro E-mail: kawasaki@icrr.u-tokyo.ac.jp E-mail: takesako@icrr.u-tokyo.ac.jp

    2013-12-01

    We propose a new approach for calculating the curvature perturbations produced during inflation in the stochastic formalism. In our formalism, the fluctuations of the e-foldings are directly calculated without perturbatively expanding the inflaton field and they are connected to the curvature perturbations by the δN formalism. The result automatically includes the contributions of the higher order perturbations because we solve the equation of motion non-perturbatively. In this paper, we analytically prove that our result (the power spectrum and the nonlinearity parameter) is consistent with the standard result in single field slow-roll inflation. We also describe the algorithm for numerical calculations of the curvature perturbations in more general inflation models.

  12. Stochastic generation of explicit pore structures by thresholding Gaussian random fields

    SciTech Connect

    Hyman, Jeffrey D.; Winter, C. Larrabee

    2014-11-15

    We provide a description and computational investigation of an efficient method to stochastically generate realistic pore structures. Smolarkiewicz and Winter introduced this method in pore-resolving simulations of Darcy flows (Smolarkiewicz and Winter, 2010 [1]) without giving a complete formal description or analysis of the method, or indicating how to control the parameterization of the ensemble. We address both issues in this paper. The method consists of two steps. First, a realization of a correlated Gaussian field, or topography, is produced by convolving a prescribed kernel with an initial field of independent, identically distributed random variables. The intrinsic length scales of the kernel determine the correlation structure of the topography. Next, a sample pore space is generated by applying a level threshold to the Gaussian field realization: points are assigned to the void phase or the solid phase depending on whether the topography over them is above or below the threshold. Hence, the topology and geometry of the pore space depend on the form of the kernel and the level threshold. Manipulating these two user-prescribed quantities allows good control of pore space observables, in particular the Minkowski functionals. Extensions of the method to generate media with multiple pore structures and preferential flow directions are also discussed. To demonstrate its usefulness, the method is used to generate a pore space with physical and hydrological properties similar to a sample of Berea sandstone. -- Highlights: •An efficient method to stochastically generate realistic pore structures is provided. •Samples are generated by applying a level threshold to a Gaussian field realization. •Two user-prescribed quantities determine the topology and geometry of the pore space. •Multiple pore structures and preferential flow directions can be produced. •A pore space based on Berea sandstone is generated.
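
    The two-step construction translates almost directly into code; the sketch below uses scipy's Gaussian smoothing as the convolution kernel and picks the threshold to match a target porosity, both illustrative choices.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(6)

    # Step 1: correlated Gaussian "topography" = iid noise convolved with a
    # Gaussian kernel; the kernel width sets the correlation length.
    noise = rng.standard_normal((256, 256))
    topography = gaussian_filter(noise, sigma=4.0)   # sigma in grid cells

    # Step 2: level threshold chosen to hit a target porosity; cells above
    # the threshold become void, the rest solid.
    porosity = 0.35
    threshold = np.quantile(topography, 1.0 - porosity)
    pore_space = topography > threshold

    print("realized porosity:", pore_space.mean())
    ```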

  13. Random bistochastic matrices

    NASA Astrophysics Data System (ADS)

    Cappellini, Valerio; Sommers, Hans-Jürgen; Bruzda, Wojciech; Życzkowski, Karol

    2009-09-01

    Ensembles of random stochastic and bistochastic matrices are investigated. While all columns of a random stochastic matrix can be chosen independently, the rows and columns of a bistochastic matrix have to be correlated. We evaluate the probability measure induced into the Birkhoff polytope of bistochastic matrices by applying the Sinkhorn algorithm to a given ensemble of random stochastic matrices. For matrices of order N = 2 we derive explicit formulae for the probability distributions induced by random stochastic matrices with columns distributed according to the Dirichlet distribution. For arbitrary N we construct an initial ensemble of stochastic matrices which allows one to generate random bistochastic matrices according to a distribution locally flat at the center of the Birkhoff polytope. The value of the probability density at this point enables us to obtain an estimation of the volume of the Birkhoff polytope, consistent with recent asymptotic results.
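
    A minimal sketch of the Sinkhorn iteration described above, started from a random stochastic matrix with Dirichlet-distributed rows; the size and tolerance are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def sinkhorn(A, tol=1e-12, max_iter=10_000):
        """Alternately normalize rows and columns of a positive matrix until
        it is (numerically) bistochastic -- the map used to push an ensemble
        of stochastic matrices into the Birkhoff polytope."""
        B = A.astype(float).copy()
        for _ in range(max_iter):
            B /= B.sum(axis=1, keepdims=True)   # rows sum to 1
            B /= B.sum(axis=0, keepdims=True)   # columns sum to 1
            if np.abs(B.sum(axis=1) - 1.0).max() < tol:
                break
        return B

    # Start from a random stochastic matrix (rows drawn from a flat Dirichlet)
    A = rng.dirichlet(np.ones(4), size=4)
    B = sinkhorn(A)
    print(B.sum(axis=0), B.sum(axis=1))
    ```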

  14. Randomized tree construction algorithm to explore energy landscapes.

    PubMed

    Jaillet, Léonard; Corcho, Francesc J; Pérez, Juan-Jesús; Cortés, Juan

    2011-12-01

    In this work, a new method for exploring conformational energy landscapes is described. The method, called transition-rapidly exploring random tree (T-RRT), combines ideas from statistical physics and robot path planning algorithms. A search tree is constructed on the conformational space starting from a given state. The tree expansion is driven by a double strategy: on the one hand, it is naturally biased toward yet unexplored regions of the space; on the other, a Monte Carlo-like transition test guides the expansion toward energetically favorable regions. The balance between these two strategies is automatically achieved due to a self-tuning mechanism. The method is able to efficiently find both energy minima and transition paths between them. As a proof of concept, the method is applied to two academic benchmarks and the alanine dipeptide.
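
    The Monte Carlo-like transition test at the heart of T-RRT fits in a few lines; the fixed temperature and energy cap below are simplifications of the paper's self-tuning scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def transition_test(e_old, e_new, temperature, e_max=1e6):
        """Monte Carlo-like transition test used in T-RRT: downhill moves are
        always accepted; uphill moves are accepted with a Boltzmann-type
        probability controlled by a temperature parameter."""
        if e_new > e_max:
            return False                 # reject states above an energy cap
        if e_new <= e_old:
            return True                  # always accept downhill moves
        return rng.random() < np.exp(-(e_new - e_old) / temperature)

    # In the full algorithm the temperature is raised after repeated
    # rejections and lowered after uphill acceptances (self-tuning).
    print(transition_test(1.0, 0.5, temperature=0.1),
          transition_test(1.0, 1.4, temperature=0.1))
    ```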

  16. Random matrix approach to quantum adiabatic evolution algorithms

    SciTech Connect

    Boulatov, A.; Smelyanskiy, V.N.

    2005-05-15

    We analyze the power of the quantum adiabatic evolution algorithm (QAA) for solving random computationally hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that nonadiabatic corrections in the QAA are due to the interaction of the ground state with the 'cloud' formed by most of the excited states, confirming that in driven RMT models, the Landau-Zener scenario of pairwise level repulsions is not relevant for the description of nonadiabatic corrections. We show that the QAA has a finite probability of success in a certain range of parameters, implying a polynomial complexity of the algorithm. The second model corresponds to the standard QAA with the problem Hamiltonian taken from the RMT Gaussian unitary ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. For this reason, the driven GUE model can also lead to polynomial complexity of the QAA. The main contribution to the failure probability of the QAA comes from the nonadiabatic corrections to the eigenstates, which only depend on the absolute values of the transition amplitudes. Due to the mapping between the two models, these absolute values are the same in both cases. Our results indicate that this 'phase irrelevance' is the leading effect that can make both the Markovian- and GUE-type QAAs successful.

  17. Stochastic chemical kinetics and the total quasi-steady-state assumption: application to the stochastic simulation algorithm and chemical master equation.

    PubMed

    Macnamara, Shev; Bersani, Alberto M; Burrage, Kevin; Sidje, Roger B

    2008-09-01

    Recently the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)] and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then by advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis-Menten enzyme kinetics, double phosphorylation, the Goldbeter-Koshland switch, and the mitogen activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.
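
    For reference, the underlying SSA that such methods accelerate is short. The following direct-method Gillespie sketch runs the Michaelis-Menten example named in the abstract; the rate constants and initial counts are invented for illustration.

      import math, random

      def ssa(x, stoich, rates, t_end):
          # Direct-method Gillespie SSA. x: species counts, stoich: state
          # changes per reaction, rates(x): list of propensities.
          t = 0.0
          while True:
              a = rates(x)
              a0 = sum(a)
              if a0 == 0:
                  break
              t += -math.log(1.0 - random.random()) / a0  # exponential waiting time
              if t > t_end:
                  break
              r, acc = random.random() * a0, 0.0
              for j, aj in enumerate(a):                  # pick reaction j w.p. a_j/a0
                  acc += aj
                  if r < acc:
                      x = [xi + si for xi, si in zip(x, stoich[j])]
                      break
          return x

      # Michaelis-Menten, species order (E, S, ES, P):
      # E + S -> ES, ES -> E + S, ES -> E + P (illustrative constants)
      stoich = [(-1, -1, +1, 0), (+1, +1, -1, 0), (+1, 0, -1, +1)]
      rates = lambda x: [0.01 * x[0] * x[1], 0.1 * x[2], 0.1 * x[2]]
      print(ssa([100, 500, 0, 0], stoich, rates, t_end=50.0))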

  18. Stochastic optimal foraging: tuning intensive and extensive dynamics in random searches.

    PubMed

    Bartumeus, Frederic; Raposo, Ernesto P; Viswanathan, Gandhimohan M; da Luz, Marcos G E

    2014-01-01

    Recent theoretical developments have laid down the proper mathematical means to understand how the structural complexity of search patterns may improve foraging efficiency. Under information-deprived scenarios and specific landscape configurations, Lévy walks and flights are known to lead to high search efficiencies. Based on a one-dimensional comparative analysis, we show a mechanism by which a random searcher can optimize the encounter with both close and distant targets. The mechanism consists of combining an optimal diffusivity (optimally enhanced diffusion) with a minimal diffusion constant. In such a way the search dynamics adequately balances the tension between finding close and distant targets, while, at the same time, shifting the optimal balance towards relatively larger close-to-distant target encounter ratios. We find that introducing a multiscale set of reorientations ensures both a thorough local space exploration without oversampling and a fast spreading dynamics at the large scale. Lévy reorientation patterns account for these properties, but other reorientation strategies providing similar statistical signatures can mimic or achieve comparable efficiencies. Hence, the present work unveils general mechanisms underlying efficient random search, beyond the Lévy model. Our results suggest that animals could tune key statistical movement properties (e.g. enhanced diffusivity, minimal diffusion constant) to cope with the very general problem of balancing out intensive and extensive random searching. We believe that theoretical developments to mechanistically understand stochastic search strategies, such as the one proposed here, are crucial to develop an empirically verifiable and comprehensive animal foraging theory. PMID:25216191
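
    A crude way to see the intensive/extensive trade-off is to compare first-passage times of a heavy-tailed (Lévy-like) stepper against a fixed-step one in 1D. The sketch below is purely illustrative; the exponent, cutoff, target distance, and trial count are arbitrary.

      import random

      def levy_step(mu=2.0, l_min=1.0):
          # Power-law step length p(l) ~ l^(-mu) for l >= l_min (inverse CDF)
          return l_min * random.random() ** (-1.0 / (mu - 1.0))

      def mean_steps_to_reach(step_draw, target=100.0, trials=200):
          # Mean number of 1D steps (random direction) until |x| exceeds target
          total = 0
          for _ in range(trials):
              x, n = 0.0, 0
              while abs(x) < target:
                  x += random.choice((-1, 1)) * step_draw()
                  n += 1
              total += n
          return total / trials

      print("Levy-like (mu=2):", mean_steps_to_reach(levy_step))
      print("fixed-step      :", mean_steps_to_reach(lambda: 1.0))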

  19. Random finite set multi-target trackers: stochastic geometry for space situational awareness

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong

    2015-05-01

    This paper describes recent developments in the random finite set (RFS) paradigm for multi-target tracking. Over the last decade the Probability Hypothesis Density (PHD) filter has become synonymous with the RFS approach. As a result, the PHD filter is often wrongly used as a performance benchmark for the RFS approach. Since there is a suite of RFS-based multi-target tracking algorithms, benchmarking the tracking performance of the RFS approach using the PHD filter, the cheapest of these, is misleading. Such benchmarking should be performed with more sophisticated RFS algorithms. In this paper we outline high-performance RFS-based multi-target trackers, such as the Generalized Labeled Multi-Bernoulli filter and a number of efficient approximations, and discuss extensions and applications of these filters. Applications to space situational awareness are discussed.

  20. Inner Random Restart Genetic Algorithm for Practical Delivery Schedule Optimization

    NASA Astrophysics Data System (ADS)

    Sakurai, Yoshitaka; Takada, Kouhei; Onoyama, Takashi; Tsukamoto, Natsuki; Tsuruta, Setsuo

    A delivery route optimization that improves the efficiency of real-time delivery or a distribution network requires solving Traveling Salesman Problems (TSPs) of several tens to hundreds (but fewer than two thousand) cities within an interactive response time (about 3 seconds or less) and with expert-level accuracy (an error rate of about 3% or less). Further, to make things more difficult, the optimization is subject to special requirements or preferences of various delivery sites, persons, or societies. To meet these requirements, an Inner Random Restart Genetic Algorithm (Irr-GA) is proposed and developed. This method combines meta-heuristics, such as random restart, with a GA that embeds different types of simple heuristics. These simple heuristics are the 2-opt and NI (Nearest Insertion) methods, each applied as a gene operation. The proposed method is hierarchically structured, integrating meta-heuristics and heuristics that are multiple but simple. The method is designed so that field experts as well as field engineers can easily understand it, allowing the solution or method to be customized and extended according to customers' needs or tastes. Comparison based on experimental results showed that the method meets the above requirements better than other methods, judging not only by optimality but also by simplicity, flexibility, and expandability for practical use.
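
    A stripped-down version of two ingredients named here, random restart and 2-opt local improvement (without the GA layer), looks as follows; the instance size and restart count are arbitrary.

      import math, random

      def tour_len(tour, pts):
          return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                     for i in range(len(tour)))

      def two_opt(tour, pts):
          # Repeatedly reverse segments while doing so shortens the tour
          improved = True
          while improved:
              improved = False
              for i in range(1, len(tour) - 1):
                  for j in range(i + 1, len(tour)):
                      cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                      if tour_len(cand, pts) < tour_len(tour, pts):
                          tour, improved = cand, True
          return tour

      pts = [(random.random(), random.random()) for _ in range(30)]
      best = None
      for _ in range(5):                       # random restarts
          t = list(range(len(pts)))
          random.shuffle(t)
          t = two_opt(t, pts)
          if best is None or tour_len(t, pts) < tour_len(best, pts):
              best = t
      print("best tour length:", round(tour_len(best, pts), 3))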

  1. A stochastic simulation method for the assessment of resistive random access memory retention reliability

    SciTech Connect

    Berco, Dan; Tseng, Tseung-Yuen

    2015-12-21

    This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single-layer ZrO₂ device with a double-layer ZnO/ZrO₂ one, and obtains results that are in good agreement with experimental data.
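
    The acceptance rule at the heart of such a Metropolis/Gibbs-energy comparison fits in a few lines. Below is a generic sketch, not the paper's model: the barrier values, their distributions, and kT are invented solely to show how the relative retention of two stacks could be compared.

      import math, random

      def metropolis_accept(dG, kT=0.025):   # kT in eV at roughly 300 K
          # Metropolis criterion on a Gibbs free-energy difference dG (eV)
          return dG <= 0 or random.random() < math.exp(-dG / kT)

      # Toy relative-retention comparison: count accepted vacancy hops for two
      # hypothetical barrier distributions (single- vs double-layer stack).
      def accepted_fraction(barrier_mean, n=100000):
          return sum(metropolis_accept(random.gauss(barrier_mean, 0.05))
                     for _ in range(n)) / n

      print("single layer :", accepted_fraction(0.15))
      print("double layer :", accepted_fraction(0.25))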

  2. A matrix product algorithm for stochastic dynamics on locally tree-like graphs

    NASA Astrophysics Data System (ADS)

    Barthel, Thomas; de Bacco, Caterina; Franz, Silvio

    In this talk, I describe a novel algorithm for the efficient simulation of generic stochastic dynamics of classical degrees of freedom defined on the vertices of locally tree-like graphs. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon the cavity method and ideas from quantum many-body theory, the algorithm is based on a matrix product approximation of the so-called edge messages, the conditional probabilities of vertex variable trajectories. The matrix product edge messages (MPEM) are constructed recursively. Computation costs and accuracy can be tuned by controlling the matrix dimensions of the MPEM in truncations. In contrast to Monte Carlo simulations, the approach has a better error scaling and works both for single instances and in the thermodynamic limit. Due to the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations with unprecedented accuracy. The method is demonstrated for the prototypical non-equilibrium Glauber dynamics of an Ising spin system. Reference: arXiv:1508.03295.

  3. Stochastic resonance whole-body vibration improves postural control in health care professionals: a worksite randomized controlled trial.

    PubMed

    Elfering, Achim; Schade, Volker; Stoecklin, Lukas; Baur, Simone; Burger, Christian; Radlinger, Lorenz

    2014-05-01

    Slip, trip, and fall injuries are frequent among health care workers. Stochastic resonance whole-body vibration training was tested to improve postural control. Participants included 124 employees of a Swiss university hospital. The randomized controlled trial included an experimental group given 8 weeks of training and a control group with no intervention. In both groups, postural control was assessed as mediolateral sway on a force plate before and after the 8-week trial. Mediolateral sway was significantly decreased by stochastic resonance whole-body vibration training in the experimental group but not in the control group that received no training (p < .05). Stochastic resonance whole-body vibration training is an option in the primary prevention of balance-related injury at work.

  4. Phase-distortion correction based on stochastic parallel proportional-integral-derivative algorithm for high-resolution adaptive optics

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Wu, Ke-nan; Gao, Hong; Jin, Yu-qi

    2015-02-01

    A novel optimization method, the stochastic parallel proportional-integral-derivative (SPPID) algorithm, is proposed for high-resolution phase-distortion correction in wave-front sensorless adaptive optics (WSAO). To enhance the global search and self-adaptation of the stochastic parallel gradient descent (SPGD) algorithm, the residual error of the performance metric and its temporal integration are added into the calculation of the incremental control signal. Based on the maximum fitting rate between the real wave-front and the corrector, a goal value of the metric is set as the reference. The residual error of the metric relative to this reference is transformed into proportional and integral terms to produce the adaptive step-size update law of the SPGD algorithm. The adaptation of the step size leads the blind optimization toward the desired goal and helps it escape from local extrema. Unlike the conventional proportional-integral-derivative (PID) algorithm, the SPPID algorithm designs the incremental control signal as PI-by-D for adaptive adjustment of the control law in the SPGD algorithm. Experiments on high-resolution phase-distortion correction in "frozen" turbulence, based on optimization of influence-function coefficients, were carried out using a 128-by-128 spatial light modulator, a photodetector, and a control computer. The results show that the presented algorithm offers better performance in both cases. The step-size update based on the residual error and its temporal integration resolves the severe local lock-in problem of the SPGD algorithm in high-resolution adaptive optics.
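
    The flavor of SPGD with a PI-adapted step size can be sketched as follows. This is a schematic stand-in, not the authors' SPPID controller: the quadratic metric, gain constants, and leaky integral term are all illustrative choices.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 32                                    # actuator channels
      u = np.zeros(n)                           # control vector
      aberr = rng.normal(size=n)                # "frozen" aberration to cancel

      def metric(u):                            # performance metric (0 is ideal)
          return -np.sum((u - aberr) ** 2)

      J_goal, kp, ki, integ = 0.0, 0.05, 0.005, 0.0
      for _ in range(3000):
          du = 0.05 * rng.choice((-1.0, 1.0), size=n)   # SPGD: random perturbation
          dJ = metric(u + du) - metric(u - du)          # two-sided metric probe
          err = J_goal - metric(u)                      # residual to the goal metric
          integ = 0.9 * integ + err                     # leaky integral term
          gain = min(1.0, kp * err + ki * integ)        # PI-adapted step size
          u += gain * dJ * du

      print("residual metric:", metric(u))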

  5. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    bootstrap (MABB) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary search based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps reduce the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (in which both the parametric and non-parametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (in which model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.

  6. An accurate treatment of diffuse reflection boundary conditions for a stochastic particle Fokker-Planck algorithm with large time steps

    NASA Astrophysics Data System (ADS)

    Önskog, Thomas; Zhang, Jun

    2015-12-01

    In this paper, we present a stochastic particle algorithm for the simulation of flows of wall-confined gases with diffuse reflection boundary conditions. Based on the theoretical observation that the change in location of the particles consists of a deterministic part and a Wiener process if the time scale is much larger than the relaxation time, a new estimate for the first hitting time at the boundary is obtained. This estimate facilitates the construction of an algorithm with large time steps for wall-confined flows. Numerical simulations verify that the proposed algorithm reproduces the correct boundary behaviour.
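
    The key ingredient, an estimate of whether the Wiener part touches the wall inside one large step, can be illustrated with the classical Brownian-bridge hitting probability exp(-2ab/(σ²Δt)) for endpoint distances a, b > 0 from the wall. The sketch below is illustrative only; in particular, the re-emission rule is an invented stand-in for the paper's diffuse-reflection treatment.

      import math, random

      def wall_hit_prob(a, b, sigma2, dt):
          # Probability that a Brownian path from distance a to distance b from
          # the wall (both > 0) touches the wall within dt (Brownian bridge)
          return math.exp(-2.0 * a * b / (sigma2 * dt))

      def step_with_boundary(x, drift, sigma2, dt, wall=0.0):
          # One large time step; handle a possible wall hit inside the step
          x_new = x + drift * dt + random.gauss(0.0, math.sqrt(sigma2 * dt))
          if x_new <= wall or random.random() < wall_hit_prob(
                  x - wall, x_new - wall, sigma2, dt):
              # stand-in for diffuse reflection: re-emit with a positive excursion
              x_new = wall + abs(random.gauss(0.0, math.sqrt(sigma2 * dt / 2)))
          return x_new

      x = 1.0
      for _ in range(10):
          x = step_with_boundary(x, drift=-0.1, sigma2=0.5, dt=0.1)
      print("final position:", x)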

  7. A Stochastic Algorithm for the Isobaric-Isothermal Ensemble with Ewald Summations for All Long Range Forces.

    PubMed

    Di Pierro, Michele; Elber, Ron; Leimkuhler, Benedict

    2015-12-01

    We present an algorithm termed COMPEL (COnstant Molecular Pressure with Ewald sum for Long range forces) to conduct simulations in the NPT ensemble. The algorithm combines novel features recently proposed in the literature to obtain a highly efficient and accurate numerical integrator. COMPEL exploits the concepts of molecular pressure, rapid stochastic relaxation to equilibrium, exact calculation of the contribution to the pressure of long-range nonbonded forces with Ewald summation, and the use of Trotter expansion to generate a robust, highly stable, symmetric, and accurate algorithm. Explicit implementation in the MOIL program and illustrative numerical examples are discussed. PMID:26616351

  9. Stochastic generation of explicit pore structures by thresholding Gaussian random fields

    NASA Astrophysics Data System (ADS)

    Hyman, Jeffrey D.; Winter, C. Larrabee

    2014-11-01

    We provide a description and computational investigation of an efficient method to stochastically generate realistic pore structures. Smolarkiewicz and Winter introduced this specific method in pore-resolving simulations of Darcy flows (Smolarkiewicz and Winter, 2010 [1]) without giving a complete formal description or analysis of the method, or indicating how to control the parameterization of the ensemble. We address both issues in this paper. The method consists of two steps. First, a realization of a correlated Gaussian field, or topography, is produced by convolving a prescribed kernel with an initial field of independent, identically distributed random variables. The intrinsic length scales of the kernel determine the correlation structure of the topography. Next, a sample pore space is generated by applying a level threshold to the Gaussian field realization: points are assigned to the void phase or the solid phase depending on whether the topography over them is above or below the threshold. Hence, the topology and geometry of the pore space depend on the form of the kernel and the level threshold. Manipulating these two user-prescribed quantities allows good control of pore space observables, in particular the Minkowski functionals. Extensions of the method to generate media with multiple pore structures and preferential flow directions are also discussed. To demonstrate its usefulness, the method is used to generate a pore space with physical and hydrological properties similar to a sample of Berea sandstone.
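
    Both steps of the method are compact in NumPy: convolve white noise with a kernel (here via FFT, with a Gaussian kernel as one possible choice), then threshold. Grid size, correlation length, and threshold level are the user-prescribed knobs; the specific values below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(42)
      n, ell, level = 256, 8.0, 0.3     # grid size, kernel length scale, threshold

      # Step 1: correlated Gaussian "topography" = Gaussian kernel (*) white
      # noise, computed in Fourier space for speed.
      white = rng.normal(size=(n, n))
      kx = np.fft.fftfreq(n)[:, None]
      ky = np.fft.fftfreq(n)[None, :]
      kernel_hat = np.exp(-2 * (np.pi * ell) ** 2 * (kx ** 2 + ky ** 2))
      topo = np.fft.ifft2(np.fft.fft2(white) * kernel_hat).real
      topo /= topo.std()

      # Step 2: level threshold; True = void (pore), False = solid.
      pores = topo > level
      print("porosity: %.3f" % pores.mean())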

  10. The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data.

    SciTech Connect

    Webster, Clayton; Tempone, Raul; Nobile, Fabio

    2007-12-01

    This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.

  11. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents a general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single-level approximation. The numerical results highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
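
    The telescoping idea behind multilevel acceleration can be shown with a toy quantity of interest whose level-l approximation error decays like 2^(-l). The "solver" below is a stand-in for a discretized SPDE functional, and the per-level sample counts are arbitrary; the point is that coupled coarse/fine differences need few samples on expensive levels.

      import numpy as np

      rng = np.random.default_rng(7)

      def P(omega, level):
          # Stand-in for a level-`level` discrete solution functional: a
          # quantity that converges as the mesh refines (error ~ 2^-level).
          return np.sin(omega) + 2.0 ** (-level) * np.cos(3 * omega)

      L, N = 5, [4000, 2000, 1000, 500, 250, 125]   # more samples on cheap levels
      est = 0.0
      for l in range(L + 1):
          omega = rng.normal(size=N[l])             # same inputs couple the levels
          if l == 0:
              est += P(omega, 0).mean()
          else:
              est += (P(omega, l) - P(omega, l - 1)).mean()  # telescoping correction
      print("MLMC-style estimate of E[P]: %.4f" % est)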

  12. A Hybrid of the Chemical Master Equation and the Gillespie Algorithm for Efficient Stochastic Simulations of Sub-Networks.

    PubMed

    Albert, Jaroslav

    2016-01-01

    Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology, the gene switch and the Griffith model of a genetic oscillator, and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them.

  13. A Hybrid of the Chemical Master Equation and the Gillespie Algorithm for Efficient Stochastic Simulations of Sub-Networks

    PubMed Central

    Albert, Jaroslav

    2016-01-01

    Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology, the gene switch and the Griffith model of a genetic oscillator, and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them. PMID:26930199

  14. A planning model with a solution algorithm for ready mixed concrete production and truck dispatching under stochastic travel times

    NASA Astrophysics Data System (ADS)

    Yan, S.; Lin, H. C.; Jiang, X. Y.

    2012-04-01

    In this study the authors employ network flow techniques to construct a systematic model that helps ready mixed concrete carriers effectively plan production and truck dispatching schedules under stochastic travel times. The model is formulated as a mixed integer network flow problem with side constraints. Problem decomposition and relaxation techniques, coupled with the CPLEX mathematical programming solver, are employed to develop an algorithm capable of efficiently solving the problem. A simulation-based evaluation method is also proposed to evaluate the model against a deterministic model and the method currently used in actual operations. Finally, a case study is performed using real operating data from a Taiwanese RMC firm. The test results show that the system operating cost obtained using the stochastic model is a significant improvement over that obtained using the deterministic model or the manual approach. Consequently, the model and the solution algorithm could be useful for actual operations.

  15. Biased Randomized Algorithm for Fast Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Williams, Colin; Vartan, Farrokh

    2005-01-01

    A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional-satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding article "Fast Algorithms for Model-Based Diagnosis" and in "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appears elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis, the task of finding the faulty components, reduces to finding the components whose abnormalities could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) and slow, and yields a lower bound on the size of the
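
    The combinatorial core, biased randomized search for a small hitting set over conflict sets, can be sketched as follows. The conflict data are invented, and the bias rule (weight a component by how many unhit conflict sets it appears in) is one plausible choice, not necessarily the one used in the reported method.

      import random

      conflicts = [{"c1", "c2"}, {"c2", "c3"}, {"c3", "c4"}, {"c1", "c4"}]

      def biased_hitting_set(conflicts, restarts=200):
          best = None
          for _ in range(restarts):
              unhit, hs = [set(c) for c in conflicts], set()
              while unhit:
                  # bias: weight each candidate by how many conflict sets it hits
                  weights = {}
                  for c in unhit:
                      for comp in c:
                          weights[comp] = weights.get(comp, 0) + 1
                  comps = list(weights)
                  pick = random.choices(comps, [weights[c] for c in comps])[0]
                  hs.add(pick)
                  unhit = [c for c in unhit if pick not in c]
              if best is None or len(hs) < len(best):
                  best = hs
          return best

      print("minimal diagnosis candidate:", biased_hitting_set(conflicts))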

  16. Phase locking of a seven-channel continuous wave fibre laser system by a stochastic parallel gradient algorithm

    SciTech Connect

    Volkov, M V; Garanin, S G; Dolgopolov, Yu V; Kopalkin, A V; Kulikov, S M; Sinyavin, D N; Starikov, F A; Sukharev, S A; Tyutin, S V; Khokhlov, S V; Chaparin, D A

    2014-11-30

    A seven-channel fibre laser system operated by the master oscillator – multichannel power amplifier scheme is phase locked using a stochastic parallel gradient algorithm. The phase modulators on lithium niobate crystals are controlled by a multichannel electronic unit with a microcontroller processing signals in real time. Dynamic phase locking of the laser system with a bandwidth of 14 kHz is demonstrated; the time of phasing is 3 – 4 ms.

  17. Phase Transitions in Sampling Algorithms and the Underlying Random Structures

    NASA Astrophysics Data System (ADS)

    Randall, Dana

    Sampling algorithms based on Markov chains arise in many areas of computing, engineering and science. The idea is to perform a random walk among the elements of a large state space so that samples chosen from the stationary distribution are useful for the application. In order to get reliable results, we require the chain to be rapidly mixing, or quickly converging to equilibrium. For example, to sample independent sets in a given graph G, the so-called hard-core lattice gas model, we can start at any independent set and repeatedly add or remove a single vertex (if allowed). By defining the transition probabilities of these moves appropriately, we can ensure that the chain will converge to a useful distribution over the state space Ω. For instance, the Gibbs (or Boltzmann) distribution, parameterized by Λ > 0, is defined so that π(I) = Λ^{|I|}/Z, where Z = sum_{J in Ω} Λ^{|J|} is the normalizing constant known as the partition function. An interesting phenomenon occurs as Λ is varied. For small values of Λ, local Markov chains converge quickly to stationarity, while for large values, they are prohibitively slow. To see why, imagine the underlying graph G is a region of the Cartesian lattice. Large independent sets will dominate the stationary distribution π when Λ is sufficiently large, and yet it will take a very long time to move from an independent set lying mostly on the odd sublattice to one that is mostly even. This phenomenon is well known in the statistical physics community, and is characterized by a phase transition in the underlying model.
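
    The insert/delete chain described above takes only a few lines for the target π(I) proportional to Λ^{|I|} on an n-by-n grid; the grid size, Λ, and step count below are arbitrary.

      import random

      n, lam, steps = 10, 2.0, 100000
      occ = set()                       # current independent set on an n x n grid

      def neighbors(v):
          x, y = v
          return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= x + dx < n and 0 <= y + dy < n]

      for _ in range(steps):
          v = (random.randrange(n), random.randrange(n))
          if v in occ:
              # propose deletion; Metropolis acceptance min(1, 1/lam)
              if random.random() < min(1.0, 1.0 / lam):
                  occ.remove(v)
          elif all(u not in occ for u in neighbors(v)):
              # propose insertion (only if it keeps the set independent),
              # Metropolis acceptance min(1, lam)
              if random.random() < min(1.0, lam):
                  occ.add(v)

      print("sampled independent-set size:", len(occ))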

  18. Adaptive randomized algorithms for analysis and design of control systems under uncertain environments

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia

    2015-05-01

    We consider the general problem of analysis and design of control systems in the presence of uncertainties. We treat uncertainties that affect a control system as random variables. The performance of the system is measured by the expectation of some derived random variables, which are typically bounded. We develop adaptive sequential randomized algorithms for estimating and optimizing the expectation of such bounded random variables with guaranteed accuracy and confidence level. These algorithms can be applied to overcome the conservatism and computational complexity in the analysis and design of controllers to be used in uncertain environments. We develop methods for investigating the optimality and computational complexity of such algorithms.
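
    The non-adaptive baseline for such guarantees is the Hoeffding sample-size bound for a performance indicator bounded in [0, 1]; the adaptive sequential schemes in the paper refine this. The toy "uncertain plant" below is invented for illustration.

      import math, random

      def estimate_expectation(sample_perf, eps=0.01, delta=1e-3):
          # Monte Carlo estimate of E[perf] for perf in [0, 1], with additive
          # accuracy eps and confidence 1 - delta (Hoeffding sample-size bound)
          n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
          return sum(sample_perf() for _ in range(n)) / n, n

      # Toy uncertain plant: performance = indicator that a random gain is acceptable
      perf = lambda: 1.0 if random.gauss(1.0, 0.2) < 1.5 else 0.0
      p_hat, n = estimate_expectation(perf)
      print("estimated probability of acceptable performance: %.3f (n=%d)" % (p_hat, n))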

  19. A Stochastic Simulation Framework for the Prediction of Strategic Noise Mapping and Occupational Noise Exposure Using the Random Walk Approach

    PubMed Central

    Haron, Zaiton; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri

    2015-01-01

    Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. Also, it assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high accuracy of prediction results together with a majority of absolute differences of less than 2 dBA; also, the predicted noise doses were mostly in the range of measurement. Therefore, the random walk approach was effective in dealing with environmental noises. It could predict strategic noise mapping to facilitate noise monitoring and noise control in the workplaces. PMID:25875019

  20. A stochastic simulation framework for the prediction of strategic noise mapping and occupational noise exposure using the random walk approach.

    PubMed

    Han, Lim Ming; Haron, Zaiton; Yahya, Khairulzan; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri

    2015-01-01

    Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. Also, it assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high accuracy of prediction results together with a majority of absolute differences of less than 2 dBA; also, the predicted noise doses were mostly in the range of measurement. Therefore, the random walk approach was effective in dealing with environmental noises. It could predict strategic noise mapping to facilitate noise monitoring and noise control in the workplaces.

  1. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun

    2015-06-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).

  2. Random sparse sampling strategy using stochastic simulation and estimation for a population pharmacokinetic study

    PubMed Central

    Huang, Xiao-hui; Wang, Kun; Huang, Ji-han; Xu, Ling; Li, Lu-jin; Sheng, Yu-cheng; Zheng, Qing-shan

    2013-01-01

    The purpose of this study was to use the stochastic simulation and estimation method to evaluate the effects of sample size and the number of samples per individual on model development and evaluation. The pharmacokinetic parameters and inter- and intra-individual variation were obtained from a population pharmacokinetic model of clinical trials of amlodipine. Stochastic simulation and estimation were performed to evaluate the efficiency of different sparse sampling scenarios for estimating the compartment model. Simulated data were generated 1000 times, and three candidate models were used to fit the 1000 data sets. Fifty-five sparse sampling scenarios were investigated and compared. The results showed that 60 samples with three points and 20 samples with five points are recommended, and that the quantitative methodology of stochastic simulation and estimation is valuable for efficiently estimating the compartment model and can be used for other similar model development and evaluation approaches. PMID:24493975

  3. Emergence of Heavy-Tailed Distributions in a Random Multiplicative Model Driven by a Gaussian Stochastic Process

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan

    2014-02-01

    We consider a random multiplicative stochastic process with multipliers given by the exponential of a Brownian motion. The positive integer moments of the distribution function can be computed exactly, and can be represented as the grand partition function of an equivalent lattice gas with attractive 2-body interactions. The numerical results for the positive integer moments display a sharp transition at a critical value of the model parameters, which corresponds to a phase transition in the equivalent lattice gas model. The shape of the terminal distribution changes suddenly at the critical point to a heavy-tailed distribution. The transition can be related to the position of the complex zeros of the grand partition function of the lattice gas, in analogy with the Lee, Yang picture of phase transitions in statistical mechanics. We study the properties of the equivalent lattice gas in the thermodynamical limit, which corresponds to the continuous time limit of the random multiplicative model, and derive the asymptotics of the approach to the continuous time limit. The results can be generalized to a wider class of random multiplicative processes, driven by the exponential of a Gaussian stochastic process.

  4. Diffusion and stochastic island generation in the magnetic field line random walk

    SciTech Connect

    Vlad, M.; Spineanu, F.

    2014-08-10

    The cross-field diffusion of field lines in stochastic magnetic fields described by the 2D+slab model is studied using a semi-analytic statistical approach, the decorrelation trajectory method. We show that field line trapping and the associated stochastic magnetic islands strongly influence the diffusion coefficients, leading to dependences on the parameters that are different from the quasilinear and Bohm regimes. A strong amplification of the diffusion is produced by a small slab field in the presence of trapping. The diffusion regimes are determined and the corresponding physical processes are identified.

  5. Stochastic contraction-based observer and controller design algorithm with application to a flight vehicle

    NASA Astrophysics Data System (ADS)

    Mohamed, Majeed; Narayan Kar, Indra

    2015-11-01

    This paper focuses on a stochastic version of contraction theory to construct an observer-controller structure for a flight dynamic system with noisy velocity measurements. A nonlinear stochastic observer is designed to estimate the pitch rate, the pitch angle, and the velocity of an example aircraft model using stochastic contraction theory. The estimated states are used to compute feedback control for solving a tracking problem. The structure and gain selection of the observer are carried out using Itô stochastic differential equations and contraction theory. The contraction property of the integrated observer-controller structure is derived to ensure the exponential convergence of the trajectories of the closed-loop nonlinear system. The upper bound of the state estimation error is explicitly derived, and the efficacy of the proposed observer-controller structure is shown through numerical simulations.

  6. Random traveling wave and bifurcations of asymptotic behaviors in the stochastic KPP equation driven by dual noises

    NASA Astrophysics Data System (ADS)

    Huang, Zhehao; Liu, Zhengrong

    2016-07-01

    In this paper, we study the influence of dual environmental noises on the traveling wave that develops from the deterministic KPP equation. We prove that if the strengths of the noises satisfy a certain condition, the solution of the stochastic KPP equation with Heaviside initial condition develops a random traveling wave, whose wave speed is deterministic and depends on the strengths of the noises. If the strengths of the noises satisfy other conditions, the solution tends to zero as time tends to infinity. Therefore, there exist bifurcations of the asymptotic behavior of the solution induced by the strengths of the dual noises.

  7. A Bayesian 3D data fusion and unsupervised joint segmentation approach for stochastic geological modelling using Hidden Markov random fields

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Wellmann, Florian

    2016-04-01

    It is generally accepted that 3D geological models inferred from observed data will contain a certain amount of uncertainty. Uncertainty quantification and stochastic sampling methods are essential for gaining insight into the geological variability of subsurface structures. In the community of deterministic or traditional modelling techniques, classical geostatistical methods using boreholes (hard data sets) are still the most widely accepted, although they suffer certain drawbacks. Modern geophysical measurements provide regional data sets in 2D or 3D spaces, either directly from sensors or indirectly from inverse problem solving using observed signals (soft data sets). We propose a stochastic modelling framework to extract subsurface heterogeneity from multiple and complementary types of data. In the presented work, subsurface heterogeneity is considered as the "hidden link" among multiple spatial data sets as well as inversion results. Hidden Markov random field models are employed to perform 3D segmentation, which is the representation of this "hidden link". Finite Gaussian mixture models are adopted to characterize the statistical parameters of the multiple data sets. The uncertainties are quantified via a Gibbs sampling process under the Bayesian inferential framework. The proposed modelling framework is validated using two numerical examples, and the model behavior and convergence are examined. It is shown that the presented stochastic modelling framework is a promising tool for 3D data fusion in the geological modelling and geophysics communities.

  8. Stochastic nonlinear wave equation with memory driven by compensated Poisson random measures

    SciTech Connect

    Liang, Fei; Gao, Hongjun

    2014-03-15

    In this paper, we study a class of stochastic nonlinear wave equation with memory driven by Lévy noise. We first show the existence and uniqueness of global mild solutions using a suitable energy function. Second, under some additional assumptions we prove the exponential stability of the solutions.

  9. A stochastic thermostat algorithm for coarse-grained thermomechanical modeling of large-scale soft matters: Theory and application to microfilaments

    SciTech Connect

    Li, Tong; Gu, YuanTong

    2014-04-15

    As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the length scale of soft matters in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics methods would underestimate the thermodynamic behaviors of soft matters (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matters.

  10. A stochastic thermostat algorithm for coarse-grained thermomechanical modeling of large-scale soft matters: Theory and application to microfilaments

    NASA Astrophysics Data System (ADS)

    Li, Tong; Gu, YuanTong

    2014-04-01

    As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the length scale of soft matters in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics methods would underestimate the thermodynamic behaviors of soft matters (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matters.
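
    Generic stochastic (Langevin-type) thermostats of the kind this work builds on can be written in a few lines. The sketch below is a standard BAOAB-like splitting on a toy harmonic bead, not the authors' coarse-grained algorithm; force field and constants are illustrative.

      import math, random

      def langevin_step(x, v, force, dt=0.01, gamma=1.0, kT=1.0, m=1.0):
          # Half kick, half drift, Ornstein-Uhlenbeck velocity refresh (injects
          # the correct k_B T), half drift, half kick.
          v += 0.5 * dt * force(x) / m
          x += 0.5 * dt * v
          c = math.exp(-gamma * dt)
          v = c * v + math.sqrt((1 - c * c) * kT / m) * random.gauss(0.0, 1.0)
          x += 0.5 * dt * v
          v += 0.5 * dt * force(x) / m
          return x, v

      force = lambda x: -x                    # harmonic tether (toy force field)
      x, v, avg = 1.0, 0.0, 0.0
      for _ in range(100000):
          x, v = langevin_step(x, v, force)
          avg += v * v
      print("<v^2> ~ kT/m :", avg / 100000)   # should fluctuate around 1.0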

  11. Exact Mapping of the Stochastic Field Theory for Manna Sandpiles to Interfaces in Random Media

    NASA Astrophysics Data System (ADS)

    Le Doussal, Pierre; Wiese, Kay Jörg

    2015-03-01

    We show that the stochastic field theory for directed percolation in the presence of an additional conservation law [the conserved directed-percolation (C-DP) class] can be mapped exactly to the continuum theory for the depinning of an elastic interface in short-range correlated quenched disorder. Along one line of the parameters commonly studied, this mapping leads to the simplest overdamped dynamics. Away from this line, an additional memory term arises in the interface dynamics; we argue that this does not change the universality class. Since C-DP is believed to describe the Manna class of self-organized criticality, this shows that Manna stochastic sandpiles and disordered elastic interfaces (i.e., the quenched Edwards-Wilkinson model) share the same universal large-scale behavior.

  12. Can stochastic, dissipative wave fields be treated as random walk generators

    NASA Technical Reports Server (NTRS)

    Weinstock, J.

    1986-01-01

    A suggestion by Meek et al. (1985) that the gravity wave field be viewed as stochastic, with significant nonlinearities, is applied to calculate diffusivities. The purpose here is to calculate the diffusivity for a stochastic wave model and compare it with previous diffusivity estimates. The researchers do this for an idealized case in which the wind velocity changes only slowly, and for which saturation is the principal mechanism by which wave energy is lost. A related calculation was given in a very brief way (Weinstock, 1976), but the approximations were not fully justified, nor were the physical presuppositions clearly explained. The observations of Meek et al. (1985) have clarified the presuppositions for the researchers and provided a rationalization and improvement of the approximations employed.

  13. Comparing three stochastic search algorithms for computational protein design: Monte Carlo, replica exchange Monte Carlo, and a multistart, steepest-descent heuristic.

    PubMed

    Mignon, David; Simonson, Thomas

    2016-07-15

    Computational protein design depends on an energy function and an algorithm to search the sequence/conformation space. We compare three stochastic search algorithms: a heuristic, Monte Carlo (MC), and a Replica Exchange Monte Carlo method (REMC). The heuristic performs a steepest-descent minimization starting from thousands of random starting points. The methods are applied to nine test proteins from three structural families, with a fixed backbone structure, a molecular mechanics energy function, and with 1, 5, 10, 20, 30, or all amino acids allowed to mutate. Results are compared to an exact, "Cost Function Network" method that identifies the global minimum energy conformation (GMEC) in favorable cases. The designed sequences accurately reproduce experimental sequences in the hydrophobic core. The heuristic and REMC agree closely and reproduce the GMEC when it is known, with a few exceptions. Plain MC performs well for most cases, occasionally departing from the GMEC by 3-4 kcal/mol. With REMC, the diversity of the sequences sampled agrees with exact enumeration where the latter is possible: up to 2 kcal/mol above the GMEC. Beyond, room temperature replicas sample sequences up to 10 kcal/mol above the GMEC, providing thermal averages and a solution to the inverse protein folding problem. © 2016 Wiley Periodicals, Inc. PMID:27197555

  15. Introducing Stochastic Simulation of Chemical Reactions Using the Gillespie Algorithm and MATLAB: Revisited and Augmented

    ERIC Educational Resources Information Center

    Argoti, A.; Fan, L. T.; Cruz, J.; Chou, S. T.

    2008-01-01

    The stochastic simulation of chemical reactions, specifically, a simple reversible chemical reaction obeying the first-order, i.e., linear, rate law, has been presented by Martinez-Urreaga and his collaborators in this journal. The current contribution is intended to complement and augment their work in two aspects. First, the simple reversible…

  16. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection is one of the most important problems in photogrammetry. It aims to recover the position and attitude of the camera at the shooting point. In some cases, however, the observations used in the computation contain gross errors. This paper presents a robust algorithm which, by using the RANSAC method with a DLT model, effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way of obtaining the elements of exterior orientation.
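
    The RANSAC loop itself is generic; here it is on a toy line-fitting problem standing in for DLT-based resection, with invented noise levels and thresholds: draw minimal samples, score by consensus, and keep the model with the most inliers.

      import random

      def ransac_line(points, iters=500, tol=0.05):
          # Fit y = a*x + b robustly: random minimal samples, keep the model
          # with the largest consensus (inlier) set.
          best, best_inliers = None, []
          for _ in range(iters):
              (x1, y1), (x2, y2) = random.sample(points, 2)   # minimal sample
              if x1 == x2:
                  continue
              a = (y2 - y1) / (x2 - x1)
              b = y1 - a * x1
              inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
              if len(inliers) > len(best_inliers):
                  best, best_inliers = (a, b), inliers
          return best, len(best_inliers)

      pts = [(x / 100, 2 * x / 100 + 1 + random.gauss(0, 0.01)) for x in range(100)]
      pts += [(random.random(), random.random()) for _ in range(30)]  # gross errors
      print(ransac_line(pts))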

  17. A data based random number generator for a multivariate distribution (using stochastic interpolation)

    NASA Technical Reports Server (NTRS)

    Thompson, J. R.; Taylor, M. S.

    1982-01-01

    Let X be a k-dimensional random variable serving as input for a system with output Y (not necessarily of dimension k). Given X, an outcome Y or a distribution of outcomes G(Y|X) may be obtained either explicitly or implicitly. We consider the situation in which there is a real-world data set {X_j, j = 1, ..., n} and a means of simulating an outcome Y. A method for empirical random number generation, based on the sample of observations of the random variable X without estimating the underlying density, is discussed.
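
    A sketch in the spirit of this data-based generator: resample a datum, take its m nearest neighbours, and return a randomly weighted combination centred on their mean. The weight interval below is chosen only to keep the weights centred at 1/m; the original algorithm fixes it so that the first two sample moments are preserved, so treat the constants as approximate.

      import numpy as np

      rng = np.random.default_rng(0)

      def data_based_rng(data, m=5):
          # Nearest-neighbour smoothed bootstrap: perturb a random datum
          # toward its m nearest neighbours (stochastic interpolation).
          i = rng.integers(len(data))
          d = np.linalg.norm(data - data[i], axis=1)
          nbrs = data[np.argsort(d)[:m]]          # m nearest, incl. the datum
          u = rng.uniform(1 / m - np.sqrt(3 * (m - 1)) / m,
                          1 / m + np.sqrt(3 * (m - 1)) / m, size=m)
          xbar = nbrs.mean(axis=0)
          return xbar + u @ (nbrs - xbar)         # randomly weighted combination

      data = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=500)
      print(data_based_rng(data))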

  18. Polarization of an electromagnetic wave in a randomly birefringent medium: a stochastic theory of the Stokes parameters.

    PubMed

    Botet, Robert; Kuratsuji, Hiroshi

    2010-03-01

    We present a framework for the stochastic features of the polarization state of an electromagnetic wave propagating through the optical medium with both deterministic (controlled) and disordered birefringence. In this case, the Stokes parameters obey a Langevin-type equation on the Poincaré sphere. The functional integral method provides a natural tool to derive the Fokker-Planck equation for the probability distribution of the Stokes parameters. We solve the Fokker-Planck equation in the case of a random anisotropic active medium submitted to a homogeneous electromagnetic field. The possible dissipation and relaxation phenomena are studied in general and in various cases, and we give hints about how to validate experimentally the corresponding phenomenological equations.

  19. Polarization of an electromagnetic wave in a randomly birefringent medium: A stochastic theory of the Stokes parameters

    SciTech Connect

    Botet, Robert; Kuratsuji, Hiroshi

    2010-03-15

    We present a framework for the stochastic features of the polarization state of an electromagnetic wave propagating through the optical medium with both deterministic (controlled) and disordered birefringence. In this case, the Stokes parameters obey a Langevin-type equation on the Poincaré sphere. The functional integral method provides a natural tool to derive the Fokker-Planck equation for the probability distribution of the Stokes parameters. We solve the Fokker-Planck equation in the case of a random anisotropic active medium submitted to a homogeneous electromagnetic field. The possible dissipation and relaxation phenomena are studied in general and in various cases, and we give hints about how to validate experimentally the corresponding phenomenological equations.

  20. Computationally tractable stochastic image modeling based on symmetric Markov mesh random fields.

    PubMed

    Yousefi, Siamak; Kehtarnavaz, Nasser; Cao, Yan

    2013-06-01

    In this paper, the properties of a new class of causal Markov random fields, named symmetric Markov mesh random field, are initially discussed. It is shown that the symmetric Markov mesh random fields from the upper corners are equivalent to the symmetric Markov mesh random fields from the lower corners. Based on this new random field, a symmetric, corner-independent, and isotropic image model is then derived which incorporates the dependency of a pixel on all its neighbors. The introduced image model comprises the product of several local 1D density and 2D joint density functions of pixels in an image thus making it computationally tractable and practically feasible by allowing the use of histogram and joint histogram approximations to estimate the model parameters. An image restoration application is also presented to confirm the effectiveness of the model developed. The experimental results demonstrate that this new model provides an improved tool for image modeling purposes compared to the conventional Markov random field models.

  1. A stochastic model of vaccine trials for endemic infections using group randomization.

    PubMed Central

    Riggs, T. W.; Koopman, J. S.

    2004-01-01

    To clarify the determinants of vaccine trial power for non-typable Haemophilus influenzae, we constructed stochastic SIS models of infection transmission in small units (e.g. day-care centres) to calculate the equilibrium distribution of the number infected. We investigated how unit size, contact rate (modelled as a function of the unit size), external force of infection and infection duration affected the statistical power for detection of vaccine effects on susceptibility or infectiousness. Given a frequency-dependent contact rate, the prevalence, proportion of infections generated internally and the power to detect vaccine effects each increased slightly with unit size. Under a density-dependent model, unit size had much stronger effects. To maximize information allowing inference from vaccine trials, contact functions should be empirically evaluated by studying units of differing size and molecular methods should be used to help distinguish internal vs. external transmission. PMID:15473157

  2. Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure

    PubMed Central

    Park, Wookje; Jung, Sikhang

    2014-01-01

    A fault detection algorithm was developed using a similarity measure and the random forest algorithm. The algorithm was applied to an unmanned aerial vehicle (UAV) prepared by the authors. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. Fault decisions were carried out by calculating a weighted similarity measure. Twelve available coefficients from the healthy- and faulty-status data groups were used to determine the decision. The similarity measure weights were obtained through the random forest algorithm (RFA), which provides data priorities. In order to obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated. The amount of useful data was determined by repeated trials of the similarity calculation. PMID:25057508
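
    A rough reconstruction of the weighting idea with scikit-learn (assumed available): fit a random forest on healthy/faulty feature vectors and use its feature importances as the similarity weights. The data, class sizes, and similarity form are all invented.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)

      # Synthetic "healthy" vs "faulty" status data over 12 coefficients
      X_healthy = rng.normal(0.0, 1.0, size=(200, 12))
      X_faulty = rng.normal(0.0, 1.0, size=(200, 12))
      X_faulty[:, [2, 7]] += 1.5                 # two coefficients carry the fault
      X = np.vstack([X_healthy, X_faulty])
      y = np.array([0] * 200 + [1] * 200)

      forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      w = forest.feature_importances_            # data priorities -> weights

      def weighted_similarity(a, b, w):
          # Distance-based similarity, weighted by the forest's importances
          return np.exp(-np.sum(w * np.abs(a - b)))

      print("weights:", np.round(w, 3))
      print("healthy vs faulty similarity:",
            weighted_similarity(X_healthy.mean(0), X_faulty.mean(0), w))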

  3. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling by Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
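
    The flavor of on-the-fly generation can be shown on a toy high-level model, a truncated M/M/1 birth-death chain whose rate matrix is never stored, with plain Gauss-Seidel sweeps solving the steady-state balance equations; this is a numpy sketch, not the paper's modified adaptive Gauss-Seidel:

    ```python
    import numpy as np

    LAM, MU, N = 0.8, 1.0, 200      # truncated M/M/1 birth-death chain (illustrative model)

    def rates_into(i):
        """Incoming transitions (j, q_ji) of state i, generated from the model on the fly."""
        if i > 0:
            yield i - 1, LAM        # an arrival moves i-1 -> i
        if i < N - 1:
            yield i + 1, MU         # a departure moves i+1 -> i

    def diag(i):
        """Diagonal entry q_ii = -(total outflow rate of state i)."""
        return -((LAM if i < N - 1 else 0.0) + (MU if i > 0 else 0.0))

    pi = np.full(N, 1.0 / N)
    for _ in range(2000):           # Gauss-Seidel sweeps for the balance equations pi Q = 0
        for i in range(N):
            pi[i] = -sum(pi[j] * q for j, q in rates_into(i)) / diag(i)
        pi /= pi.sum()

    print("P(empty system):", pi[0], "(untruncated exact: 1 - LAM/MU =", 1 - LAM / MU, ")")
    ```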

  4. A random fatigue of mechanize titanium abutment studied with Markoff chain and stochastic finite element formulation.

    PubMed

    Prados-Privado, María; Prados-Frutos, Juan Carlos; Calvo-Guirado, José Luis; Bea, José Antonio

    2016-11-01

    To measure fatigue in dental implants and their components, it is necessary to use a probabilistic analysis, since the randomness in the output depends on a number of parameters (such as the fatigue properties of titanium and the applied loads, unknown beforehand as they depend on mastication habits). The purpose is to apply a probabilistic approximation in order to predict fatigue life, taking into account the randomness of the variables. Greater accuracy in the results has been obtained by taking into account different load blocks with different amplitudes, as happens with bite forces during the day, allowing us to know what effects different types of bruxism have on the piece analysed. PMID:27073012
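
    A bare-bones Monte Carlo rendering of the idea (random fatigue properties combined with daily load blocks of different amplitudes through Miner's rule), assuming numpy; the S-N curve, stress levels, and cycle counts are invented for illustration and are not taken from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_mc = 100_000

    # illustrative S-N curve N(S) = C * S**(-m) with lognormal scatter in C
    C = rng.lognormal(np.log(1e12), 0.3, size=n_mc)
    m = 3.0

    # daily loading as blocks of different amplitudes: (cycles/day, mean stress MPa, std)
    blocks = [(1500, 80.0, 10.0),    # ordinary mastication
              (300, 160.0, 25.0)]    # bruxism-like block: fewer cycles, higher amplitude

    damage_per_day = np.zeros(n_mc)
    for n_cyc, mu_s, sd_s in blocks:
        S = np.maximum(rng.normal(mu_s, sd_s, size=n_mc), 1.0)
        damage_per_day += n_cyc / (C * S ** (-m))      # Miner's rule accumulation

    life_days = 1.0 / damage_per_day
    print("median fatigue life (years):", np.median(life_days) / 365.0)
    print("P(failure within 1 year):", np.mean(life_days < 365.0))
    ```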

  5. INSTRUCTIONAL CONFERENCE ON THE THEORY OF STOCHASTIC PROCESSES: Controlled random sequences and Markov chains

    NASA Astrophysics Data System (ADS)

    Yushkevich, A. A.; Chitashvili, R. Ya

    1982-12-01

    CONTENTS: Introduction. Chapter I. Foundations of the general theory of controlled random sequences and Markov chains with the expected reward criterion: § 1. Controlled random sequences, Markov chains, and models; § 2. Necessary and sufficient conditions for optimality; § 3. The Bellman equation for the value function and the existence of (ε-)optimal strategies. Chapter II. Some problems in the theory of controlled homogeneous Markov chains: § 4. Description of the solutions of the Bellman equation, a characterization of the value function, and the Bellman operator; § 5. Sufficiency of stationary strategies in homogeneous Markov models; § 6. The lexicographic Bellman equation. References.
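
    For readers who want to experiment, a minimal value-iteration sketch for the Bellman equation of a small controlled Markov chain with discounted expected reward; the chain, rewards, and discount factor are illustrative (numpy assumed):

    ```python
    import numpy as np

    # small controlled Markov chain: states 0..2, actions 0..1 (all numbers illustrative)
    P = np.array([  # P[a, s, s'] transition probabilities
        [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0: stay
        [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1: push right
    ])
    r = np.array([[0.0, 1.0, 2.0],    # r[a, s]: reward in state s under action 0
                  [-0.5, 0.5, 2.5]])  # ... and under action 1 (a costly push)
    beta = 0.95                       # discount factor

    v = np.zeros(3)
    for _ in range(1000):             # value iteration: v = max_a (r_a + beta * P_a v)
        v_new = np.max(r + beta * P @ v, axis=0)
        if np.max(np.abs(v_new - v)) < 1e-10:
            break
        v = v_new
    policy = np.argmax(r + beta * P @ v, axis=0)
    print("value function:", v, " optimal stationary strategy:", policy)
    ```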

  6. Solving the chemical master equation by a fast adaptive finite state projection based on the stochastic simulation algorithm.

    PubMed

    Sidje, R B; Vo, H D

    2015-11-01

    The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However, the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included.
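
    The core idea, using SSA trajectories to discover which states are worth keeping in the projection, can be sketched for a one-species birth-death reaction; numpy assumed, rates illustrative, and the subsequent truncated CME solve is only indicated in a comment:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    k_prod, k_deg = 10.0, 1.0          # birth-death reactions: 0 -> X and X -> 0

    def ssa_states(x0=0, t_end=50.0):
        """One Gillespie SSA trajectory; returns the set of copy numbers visited."""
        x, t, visited = x0, 0.0, {x0}
        while t < t_end:
            rates = (k_prod, k_deg * x)
            total = rates[0] + rates[1]
            t += rng.exponential(1.0 / total)
            x += 1 if rng.random() < rates[0] / total else -1
            visited.add(x)
        return visited

    # states visited by a few SSA runs form a small candidate state space on which
    # the truncated CME generator would then be solved (the projection step of FSP)
    states = sorted(set().union(*(ssa_states() for _ in range(5))))
    print(len(states), "candidate states, e.g.:", states[:10])
    ```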

  7. Eigenvalue density of linear stochastic dynamical systems: A random matrix approach

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Pastur, L.; Lytova, A.; Du Bois, J.

    2012-02-01

    Eigenvalue problems play an important role in the dynamic analysis of engineering systems modeled using the theory of linear structural mechanics. When uncertainties are considered, the eigenvalue problem becomes a random eigenvalue problem. In this paper the density of the eigenvalues of a discretized continuous system with uncertainty is discussed by considering a model in which the system matrices are Wishart random matrices. An analytical expression involving the Stieltjes transform is derived for the density of the eigenvalues when the dimension of the corresponding random matrix becomes asymptotically large. The mean matrices and the dispersion parameters associated with the mass and stiffness matrices are necessary to obtain the density of the eigenvalues in the framework of the proposed approach. The applicability of a simple eigenvalue density function, known as the Marchenko-Pastur (MP) density, is investigated. The analytical results are demonstrated by numerical examples involving a plate and the tail boom of a helicopter with uncertain properties. The new results are validated using an experiment on a vibrating plate with randomly attached spring-mass oscillators, where 100 nominally identical samples are physically created and individually tested within a laboratory framework.
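
    A quick numerical check of the Marchenko-Pastur law against sampled eigenvalues of a white Wishart matrix, assuming numpy; the dimensions are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, p = 2000, 500                          # sample size and dimension; c = p/n
    X = rng.standard_normal((n, p))
    eig = np.linalg.eigvalsh(X.T @ X / n)     # eigenvalues of a white Wishart matrix

    c = p / n
    lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    centers = np.linspace(lo, hi, 41)[:-1] + (hi - lo) / 80
    mp = np.sqrt(np.maximum((hi - centers) * (centers - lo), 0)) / (2 * np.pi * c * centers)
    emp, _ = np.histogram(eig, bins=40, range=(lo, hi), density=True)
    print("max |empirical - Marchenko-Pastur| over bins:", np.abs(emp - mp).max())
    ```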

  8. Experimental implementation of a quantum random-walk search algorithm using strongly dipolar coupled spins

    SciTech Connect

    Lu Dawei; Peng Xinhua; Du Jiangfeng; Zhu Jing; Zou Ping; Yu Yihua; Zhang Shanmin; Chen Qun

    2010-02-15

    An important quantum search algorithm based on the quantum random walk performs an oracle search on a database of N items with O(√N) calls, yielding a speedup similar to the Grover quantum search algorithm. The algorithm was implemented on a quantum information processor of three-qubit liquid-crystal nuclear magnetic resonance (NMR) in the case of finding 1 out of 4, and the diagonal elements' tomography of all the final density matrices was completed with comprehensible one-dimensional NMR spectra. The experimental results agree well with the theoretical predictions.

  9. Stochastic optimization of a cold atom experiment using a genetic algorithm

    SciTech Connect

    Rohringer, W.; Buecker, R.; Manz, S.; Betz, T.; Koller, Ch.; Goebel, M.; Perrin, A.; Schmiedmayer, J.; Schumm, T.

    2008-12-29

    We employ an evolutionary algorithm to automatically optimize different stages of a cold atom experiment without human intervention. This approach closes the loop between computer based experimental control systems and automatic real time analysis and can be applied to a wide range of experimental situations. The genetic algorithm quickly and reliably converges to the best-performing parameter set independent of the starting population. Especially in many-dimensional or connected parameter spaces, the automatic optimization outperforms a manual search.
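
    A toy closed-loop genetic optimizer in the same spirit, with a stand-in function playing the role of one experimental cycle; numpy assumed, and the population size, mutation strength, and figure of merit are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def run_experiment(params):
        """Stand-in for one automated experimental cycle returning a figure of merit."""
        return -np.sum((params - 0.7) ** 2) + 0.01 * rng.standard_normal()

    pop = rng.random((20, 5))                        # 20 candidate settings of 5 control knobs
    for _ in range(40):
        fitness = np.array([run_experiment(p) for p in pop])
        elite = pop[np.argsort(fitness)[-10:]]       # keep the better half
        ma = elite[rng.integers(10, size=10)]
        pa = elite[rng.integers(10, size=10)]
        kids = np.where(rng.random((10, 5)) < 0.5, ma, pa)   # uniform crossover
        kids += 0.05 * rng.standard_normal(kids.shape)       # mutation
        pop = np.vstack([elite, np.clip(kids, 0.0, 1.0)])

    print("best knob settings found:", pop[np.argmax([run_experiment(p) for p in pop])])
    ```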

  10. Development of Semi-Stochastic Algorithm for Optimizing Alloy Composition of High-Temperature Austenitic Stainless Steels (H-Series) for Desired Mechanical and Corrosion Properties.

    SciTech Connect

    Dulikravich, George S.; Sikka, Vinod K.; Muralidharan, G.

    2006-06-01

    The goal of this project was to adapt and use an advanced semi-stochastic algorithm for constrained multiobjective optimization and combine it with experimental testing and verification to determine optimum concentrations of alloying elements in heat-resistant and corrosion-resistant H-series stainless steel alloys that will simultaneously maximize a number of the alloy's mechanical and corrosion properties.

  11. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    NASA Astrophysics Data System (ADS)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

    There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck–chest–abdomen–pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from maximum, mean, variance Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21

  12. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study.

    PubMed

    Polan, Daniel F; Brady, Samuel L; Kaufman, Robert A

    2016-09-01

    There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from maximum, mean, variance Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21

  13. Stochastic analysis of the lateral-torsional buckling resistance of steel beams with random imperfections

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2013-10-01

    The paper deals with the statistical analysis of the resistance of a hot-rolled steel IPE beam under major-axis bending. The lateral-torsional buckling stability problem of the imperfect beam is described. The influence of bending moments and warping torsion on the ultimate limit state of the IPE beam with random imperfections is analyzed. The resistance is calculated by means of a closed-form solution. The initial geometrical imperfections of the beam are assumed to have the shape of the first eigenmode of buckling. Changes in the mean values and variances of the resistance and of the internal bending moments were studied as functions of the beam's non-dimensional slenderness. The values of non-dimensional slenderness for which the statistical characteristics of the internal moments associated with random resistance are maximal were determined.

  14. An efficient, three-dimensional, anisotropic, fractional Brownian motion and truncated fractional Levy motion simulation algorithm based on successive random additions

    NASA Astrophysics Data System (ADS)

    Lu, Silong; Molz, Fred J.; Liu, Hui Hai

    2003-02-01

    Fluid flow and solute transport in the subsurface are known to be strongly influenced by the heterogeneity of aquifers. To simulate aquifer properties, such as variations in logarithmic hydraulic conductivity (ln K), fractional Brownian motion (fBm) and truncated fractional Levy motion (fLm) were suggested previously. In this paper, an efficient three-dimensional successive random additions (SRA) algorithm is presented to construct spatial ln K distributions. A convenient conditioning procedure using the inverse-distance-weighting method as a data interpolator, which forces the generated fBm or truncated fLm realization to pass through known data points, is also included. The proposed method, coded in FORTRAN, and a complementary code for verifying fractal structure in fBm realizations based on dispersional analysis are carefully validated through numerical tests. These software packages allow one to go beyond the stationary stochastic process hydrology of the 1980s to the new geostatistics of non-stationary stochastic processes with stationary increments, as embodied by the stochastic fractals fBm and fLm and their associated increments fGn and fLn.
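
    A one-dimensional sketch of successive random additions (the paper's algorithm is three-dimensional and anisotropic, with conditioning), assuming numpy; the Hurst exponent and number of refinement levels are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def sra_fbm_1d(n_levels=12, H=0.3, sigma=1.0):
        """1D fBm trace via successive random additions (SRA): refine the grid by
        midpoint interpolation, then add noise to ALL points with variance
        shrinking by r**(2H) per level, r = 1/2."""
        x = np.array([0.0, sigma * rng.standard_normal()])
        var = sigma ** 2
        for _ in range(n_levels):
            new = np.empty(2 * len(x) - 1)
            new[0::2] = x                                # keep old grid points
            new[1::2] = 0.5 * (x[:-1] + x[1:])           # midpoint interpolation
            var *= 0.5 ** (2 * H)                        # variance scaling per level
            new += np.sqrt(var) * rng.standard_normal(new.size)  # additions everywhere
            x = new
        return x

    trace = sra_fbm_1d()
    print(len(trace), "ln(K)-like values; sample std:", trace.std())
    ```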

  15. Algorithms for adaptive stochastic control for a class of linear systems

    NASA Technical Reports Server (NTRS)

    Toda, M.; Patel, R. V.

    1977-01-01

    Control of linear, discrete time, stochastic systems with unknown control gain parameters is discussed. Two suboptimal adaptive control schemes are derived: one is based on underestimating future control and the other is based on overestimating future control. Both schemes require little on-line computation and incorporate in their control laws some information on estimation errors. The performance of these laws is studied by Monte Carlo simulations on a computer. Two single input, third order systems are considered, one stable and the other unstable, and the performance of the two adaptive control schemes is compared with that of the scheme based on enforced certainty equivalence and the scheme where the control gain parameters are known.

  16. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
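
    The pseudorandom-versus-low-discrepancy comparison described here can be reproduced in miniature for a European call under geometric Brownian motion; the sketch assumes numpy and scipy (scipy.stats.qmc supplies Sobol points), with illustrative market parameters:

    ```python
    import numpy as np
    from scipy.stats import norm, qmc

    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0     # illustrative market parameters
    n = 2 ** 16

    def call_price(u):
        """Discounted mean payoff of a European call from uniforms u (inverse-CDF sampling)."""
        z = norm.ppf(u)
        ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

    u_pseudo = np.random.default_rng(8).random(n)
    u_sobol = qmc.Sobol(d=1, scramble=True, seed=8).random_base2(16).ravel()
    print("pseudorandom:", call_price(u_pseudo), " Sobol:", call_price(u_sobol))
    # the Black-Scholes reference value for these parameters is about 10.45
    ```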

  17. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    SciTech Connect

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  18. Bad News Comes in Threes: Stochastic Structure in Random Events (Invited)

    NASA Astrophysics Data System (ADS)

    Newman, W. I.; Turcotte, D. L.; Malamud, B. D.

    2013-12-01

    Plots of random numbers have been known for nearly a century to show repetitive peak-to-peak sequences with an average length of 3. Geophysical examples include events such as earthquakes, geyser eruptions, and magnetic substorms. We consider a classic model in statistical physics, the Langevin equation x[n+1] = α*x[n] + η[n], where x[n] is the nth value of a measured quantity and η[n] is a random number, commonly a Gaussian white noise. Here, α is a parameter that ranges from 0, corresponding to independent random data, to 1, corresponding to Brownian motion which preserves memory of past steps. We show that, for α = 0, the mean peak-to-peak sequence length is 3 while, for α = 1, the mean sequence length is 4. We obtain the physical and mathematical properties of this model, including the distribution of peak-to-peak sequence lengths that can be expected. We compare the theory with observations of earthquake magnitudes emerging from large events, observations of the auroral electrojet index as a measure of global electrojet activity, and time intervals observed between successive eruptions of Old Faithful Geyser in Yellowstone National Park. We demonstrate that the largest earthquake events as described by their magnitudes are consistent with our theory for α = 0, thereby confronting the aphorism (and our analytic theory) that "bad news comes in threes." Electrojet activity, on the other hand, demonstrates some memory effects, consistent with the intuitive picture of the magnetosphere presenting a capacitor-plate like system that preserves memory. Old Faithful Geyser, finally, shows strong antipersistence effects between successive events, i.e. long-time intervals are followed by short ones, and vice versa. As an additional application, we apply our theory to the observed 3-4 year mammalian population cycles.
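
    The quoted mean peak-to-peak sequence lengths (3 for independent data, 4 for Brownian motion) are easy to check numerically; a small sketch assuming numpy:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def mean_peak_spacing(alpha, n=200_000):
        """Mean number of steps between successive local maxima of
        the Langevin sequence x[n+1] = alpha * x[n] + eta[n]."""
        x = np.empty(n)
        x[0] = 0.0
        for i in range(1, n):
            x[i] = alpha * x[i - 1] + rng.standard_normal()
        peaks = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
        return float(np.diff(peaks).mean())

    print("alpha = 0 (independent data):", mean_peak_spacing(0.0))   # theory: 3
    print("alpha = 1 (Brownian motion): ", mean_peak_spacing(1.0))   # theory: 4
    ```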

  1. High-resolution climate data over conterminous US using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Hashimoto, H.; Nemani, R. R.; Wang, W.

    2014-12-01

    We developed a new methodology to create high-resolution precipitation data using the random forest algorithm. Two approaches have traditionally been used: physical downscaling from GCM data using a regional climate model, and interpolation from ground observation data. The physical downscaling method can be applied only to a small region because it is computationally expensive and complex to deploy. On the other hand, interpolation schemes from ground observations do not consider physical processes. In this study, we utilized the random forest algorithm to integrate atmospheric reanalysis data, satellite data, topography data, and ground observation data. First we considered situations where precipitation is the same across the domain, largely dominated by storm-like systems. We then picked several points to train the random forest algorithm. The random forest algorithm estimates out-of-bag errors spatially and produces the relative importance of each input variable. This methodology has the following advantages. (1) It can ingest any spatial dataset to improve downscaling; even non-precipitation datasets can be ingested, such as satellite cloud cover data, radar reflectivity images, or modeled convective available potential energy. (2) The methodology is purely statistical, so physical assumptions are not required, whereas most interpolation schemes assume an empirical relationship between precipitation and elevation for orographic precipitation. (3) Low-quality values in the ingested data do not cause critical bias in the results because of the ensemble nature of the random forest; therefore, users do not need to pay special attention to quality control of the input data compared to other interpolation methodologies. (4) The same methodology can be applied to produce other high-resolution climate datasets, such as wind and cloud cover, variables that are usually hard to interpolate with conventional algorithms. In conclusion, the proposed methodology can produce reasonable

  2. Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm

    PubMed Central

    He, Xiaoqi; Zheng, Zizhao; Hu, Chao

    2015-01-01

    The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, there are still some important problems to be solved, among which one important problem is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and this depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as a magnetic sensor array, we propose nonlinear optimization algorithms using a random complex algorithm, applied to the optimization calculation for the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and the antinoise ability of the algorithm are compared with those of the Levenberg–Marquardt algorithm. The simulation and experiment results show that in terms of the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoise" capacity, with a larger range for initial guess values. PMID:25914561

  3. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    NASA Astrophysics Data System (ADS)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.

  4. A Stochastic, Resonance-Free Multiple Time-Step Algorithm for Polarizable Models That Permits Very Large Time Steps.

    PubMed

    Margul, Daniel T; Tuckerman, Mark E

    2016-05-10

    Molecular dynamics remains one of the most widely used computational tools in the theoretical molecular sciences to sample an equilibrium ensemble distribution and/or to study the dynamical properties of a system. The efficiency of a molecular dynamics calculation is limited by the size of the time step that can be employed, which is dictated by the highest frequencies in the system. However, many properties of interest are connected to low-frequency, long time-scale phenomena, requiring many small time steps to capture. This ubiquitous problem can be ameliorated by employing multiple time-step algorithms, which assign different time steps to forces acting on different time scales. In such a scheme, fast forces are evaluated more frequently than slow forces, and as the former are often computationally much cheaper to evaluate, the savings can be significant. Standard multiple time-step approaches are limited, however, by resonance phenomena, wherein motion on the fastest time scales limits the step sizes that can be chosen for the slower time scales. In atomistic models of biomolecular systems, for example, the largest time step is typically limited to around 5 fs. Previously, we introduced an isokinetic extended phase-space algorithm (Minary et al. Phys. Rev. Lett. 2004, 93, 150201) and its stochastic analog (Leimkuhler et al. Mol. Phys. 2013, 111, 3579) that eliminate resonance phenomena through a set of kinetic energy constraints. In simulations of a fixed-charge flexible model of liquid water, for example, the time step that could be assigned to the slow forces approached 100 fs. In this paper, we develop a stochastic isokinetic algorithm for multiple time-step molecular dynamics calculations using a polarizable model based on fluctuating dipoles. The scheme developed here employs two sets of induced dipole moments, specifically, those associated with short-range interactions and those associated with a full set of interactions. The scheme is demonstrated on

  5. Vaccine enhanced extinction in stochastic epidemic models

    NASA Astrophysics Data System (ADS)

    Billings, Lora; Mier-Y-Teran, Luis; Schwartz, Ira

    2012-02-01

    We address the problem of developing new and improved stochastic control methods that enhance extinction in disease models. In finite populations, extinction occurs when fluctuations owing to random transitions act as an effective force that drives one or more components or species to vanish. Using large deviation theory, we identify the location of the optimal path to extinction in epidemic models with stochastic vaccine controls. These models not only capture internal noise from random transitions, but also external fluctuations, such as stochastic vaccination scheduling. We quantify the effectiveness of the randomly applied vaccine over all possible distributions by using the location of the optimal path, and we identify the most efficient control algorithms. We also discuss how mean extinction times scale with epidemiological and social parameters.

  6. MRFy: Remote Homology Detection for Beta-Structural Proteins Using Markov Random Fields and Stochastic Search.

    PubMed

    Daniels, Noah M; Gallant, Andrew; Ramsey, Norman; Cowen, Lenore J

    2015-01-01

    We introduce MRFy, a tool for protein remote homology detection that captures beta-strand dependencies in a Markov random field. Over a set of 11 SCOP beta-structural superfamilies, MRFy shows a 14 percent improvement in mean Area Under the Curve for the motif recognition problem as compared to HMMER, a 25 percent improvement as compared to RAPTOR, a 14 percent improvement as compared to HHPred, and an 18 percent improvement as compared to CNFPred and RaptorX. MRFy was implemented in the Haskell functional programming language and parallelizes well on multi-core systems. MRFy is available, as source code as well as an executable, from http://mrfy.cs.tufts.edu/.

  7. Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions.

    PubMed

    de Andrade, Carlos Eduardo; Toso, Rodrigo Franco; Resende, Mauricio G C; Miyazawa, Flávio Keidi

    2015-01-01

    In this paper we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that makes use of solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
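
    The defining ingredients of a biased random-key GA (random-key chromosomes, a problem-specific greedy decoder, elite-biased crossover, and fresh mutants) fit in a short sketch for a toy winner determination instance; numpy assumed, all instance data and GA parameters illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # toy combinatorial auction: each bid is (set of items, price); first-price, single round
    bids = [({0, 1}, 9.0), ({1, 2}, 8.0), ({2, 3}, 7.5), ({0, 3}, 6.0), ({4}, 3.0)]

    def decode(keys):
        """Decoder: visit bids in random-key order, greedily accept non-conflicting ones."""
        taken, profit = set(), 0.0
        for i in np.argsort(keys):
            items, price = bids[i]
            if not (items & taken):
                taken |= items
                profit += price
        return profit

    n, n_elite, n_mut, rho = 30, 8, 5, 0.7       # population layout and elite-inheritance bias
    pop = rng.random((n, len(bids)))
    for _ in range(100):
        fit = np.array([decode(c) for c in pop])
        elite = pop[np.argsort(-fit)[:n_elite]]
        kids = np.empty((n - n_elite - n_mut, len(bids)))
        for k in range(len(kids)):               # biased crossover: elite parent x random parent
            e, o = elite[rng.integers(n_elite)], pop[rng.integers(n)]
            kids[k] = np.where(rng.random(len(bids)) < rho, e, o)
        pop = np.vstack([elite, kids, rng.random((n_mut, len(bids)))])  # mutants: fresh keys

    print("best profit found:", max(decode(c) for c in pop))
    ```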

  8. Autoclassification of the Variable 3XMM Sources Using the Random Forest Machine Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K.

    2015-11-01

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.

  9. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis.

    PubMed

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes, and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  10. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    PubMed Central

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes, and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  11. Fast randomized Hough transformation track initiation algorithm based on multi-scale clustering

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Chen, Qian; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    A fast randomized Hough transformation track initiation algorithm based on multi-scale clustering is proposed to overcome problems of traditional infrared search and track systems (IRST), which cannot provide movement information about the initial target or select the correlation threshold automatically using a two-dimensional track association algorithm based on bearing-only information. The movements of all targets are presumed to be uniform and rectilinear throughout this new algorithm. Concepts of spatial random sampling, a parameter-space dynamic linking table, and convergent mapping from image to parameter space are developed on the basis of the fast randomized Hough transformation. Considering the phenomenon of peak-value clustering due to shortcomings of peak detection itself, which is built on the threshold-value method, accuracy can only be ensured on condition that the parameter space has an obvious peak value. A multi-scale idea is therefore added to the above algorithm. Firstly, a primary association is conducted to select several alternative tracks with a low threshold. Then, the alternative tracks are processed by multi-scale clustering methods, through which accurate numbers and parameters of tracks are figured out automatically by transforming scale parameters. The first three frames are processed by this algorithm in order to get the first three targets of the track, and then two slightly different gate radii are worked out, the mean value of which is used as the global correlation threshold. Moreover, a new model for curvilinear equation correction is applied to the above track initiation algorithm for the purpose of solving the problem of shape distortion when a spatial three-dimensional curve is mapped to a two-dimensional bearing-only space. Using sideways flight, launch, and landing as examples to build models and simulate, the application of the proposed approach in simulation proves its effectiveness, accuracy, and adaptivity.

  12. Nonconvergence of the Wang-Landau algorithms with multiple random walkers

    NASA Astrophysics Data System (ADS)

    Belardinelli, R. E.; Pereyra, V. D.

    2016-05-01

    This paper discusses some convergence properties of entropic sampling Monte Carlo methods with multiple random walkers, particularly the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time t; then, the average over m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system, the 1/t algorithm is more efficient and accurate than the similar version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since doing so does not reduce the error in the calculation. Therefore, increasing the number of walkers does not guarantee convergence.
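
    A compact Wang-Landau sketch on a system with a known density of states (N independent binary spins, where g(E) is a binomial coefficient) shows the basic flat-histogram loop; numpy assumed, and the 1/t variant is only indicated in a comment:

    ```python
    import numpy as np
    from math import comb, log

    rng = np.random.default_rng(11)
    N = 20                                   # N independent binary spins; E = number of "up"

    s = rng.integers(0, 2, N)
    E = int(s.sum())
    lng = np.zeros(N + 1)                    # running estimate of ln g(E)
    hist = np.zeros(N + 1)
    lnf = 1.0
    while lnf > 1e-4:
        for _ in range(10_000):
            i = rng.integers(N)
            E_new = E + (1 - 2 * s[i])       # energy after flipping spin i
            if rng.random() < np.exp(lng[E] - lng[E_new]):   # flat-histogram acceptance
                s[i] ^= 1
                E = E_new
            lng[E] += lnf
            hist[E] += 1
        if hist.min() > 0.8 * hist.mean():   # histogram flat enough: shrink the modifier
            hist[:] = 0.0
            lnf /= 2.0                       # classic WL; the 1/t variant sets lnf ~ 1/t instead
    lng -= lng[0]                            # normalize so that g(0) = 1
    print("estimated ln g(N/2):", lng[N // 2], " exact:", log(comb(N, N // 2)))
    ```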

  13. Reducing the variability in random-phase initialized Gerchberg-Saxton Algorithm

    NASA Astrophysics Data System (ADS)

    Salgado-Remacha, Francisco Javier

    2016-11-01

    The Gerchberg-Saxton algorithm is a common tool for designing computer-generated holograms. There exist standard functions for evaluating the quality of the final results. However, the use of a randomized initial guess leads to different results, increasing the variability of the evaluation function values. This fact is especially detrimental when the computing time is long. In this work, a new tool is presented that describes the fidelity of the results with notably reduced variability over multiple runs of the Gerchberg-Saxton algorithm. This new tool is very helpful for topical fields such as 3D digital holography.
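
    For reference, the basic random-phase-initialized Gerchberg-Saxton loop itself (not the paper's new evaluation tool) can be sketched with numpy; the target pattern, iteration count, and RMSE evaluation are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    target = np.zeros((64, 64))
    target[24:40, 24:40] = 1.0                       # desired far-field amplitude pattern

    phase = rng.uniform(0.0, 2 * np.pi, target.shape)    # random initial guess
    for _ in range(200):
        far = np.fft.fft2(np.exp(1j * phase))        # propagate unit-amplitude hologram
        far = target * np.exp(1j * np.angle(far))    # impose target amplitude, keep phase
        phase = np.angle(np.fft.ifft2(far))          # phase-only hologram constraint

    recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
    recon *= target.sum() / recon.sum()              # crude scaling for comparison
    print("RMSE after 200 iterations:", np.sqrt(np.mean((recon - target) ** 2)))
    ```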

  14. Parallel Implementation of Fast Randomized Algorithms for Low Rank Matrix Decomposition

    SciTech Connect

    Lucas, Andrew J.; Stalzer, Mark; Feo, John T.

    2014-03-01

    We analyze the parallel performance of randomized interpolative decomposition by decomposing low-rank complex-valued Gaussian random matrices larger than 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, with runtime about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.
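
    A serial, small-scale relative of the method (a randomized range finder followed by an SVD, in the style of Halko et al.) on a low-rank complex Gaussian matrix, assuming numpy; the interpolative decomposition benchmarked in the paper differs in detail:

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    def randomized_low_rank(A, k, oversample=10):
        """Randomized range finder + SVD: a close relative of randomized
        interpolative decomposition (names and details here are illustrative)."""
        Omega = rng.standard_normal((A.shape[1], k + oversample))
        Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis for the sampled range
        U, s, Vt = np.linalg.svd(Q.conj().T @ A, full_matrices=False)
        return Q @ U[:, :k], s[:k], Vt[:k]

    # low-rank complex-valued Gaussian test matrix, far smaller than the paper's 100 GB cases
    m, n, k = 1200, 800, 20
    A = (rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))) @ \
        (rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n)))
    U, s, Vt = randomized_low_rank(A, k)
    print("relative error of the rank-20 approximation:",
          np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
    ```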

  15. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1982-01-01

    An on-line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of the random response and the modal damping level. Random response amplitudes of 1.25 degrees to 0.05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.

  16. On the convergence of EM-like algorithms for image segmentation using Markov random fields.

    PubMed

    Roche, Alexis; Ribes, Delphine; Bach-Cuadra, Meritxell; Krüger, Gunnar

    2011-12-01

    Inference for Markov random field image segmentation models is usually performed using iterative methods which adapt the well-known expectation-maximization (EM) algorithm for independent mixture models. However, some of these adaptations are ad hoc and may turn out to be numerically unstable. In this paper, we review three EM-like variants for Markov random field segmentation and compare their convergence properties at both the theoretical and practical levels. We specifically advocate a numerical scheme involving asynchronous voxel updating, for which general convergence results can be established. Our experiments on brain tissue classification in magnetic resonance images provide evidence that this algorithm may achieve significantly faster convergence than its competitors while yielding at least as good segmentation results.

  17. Enhancing network robustness against targeted and random attacks using a memetic algorithm

    NASA Astrophysics Data System (ADS)

    Tang, Xianglong; Liu, Jing; Zhou, Mingxing

    2015-08-01

    In the past decades, there has been much interest in the resilience of infrastructures to targeted and random attacks. In recent work by Schneider C. M. et al., Proc. Natl. Acad. Sci. U.S.A., 108 (2011) 3838, the authors proposed an effective measure (namely R; here we label it R_t to represent the measure for targeted attacks) to evaluate network robustness against targeted node attacks. Using a greedy algorithm, they found that the optimal structure is an onion-like one. However, real systems are often under threat of both targeted attacks and random failures, so enhancing network robustness against both types of attack is of great importance. In this paper, we first design a random-robustness index (R_r). We find that onion-like networks destroy the original strong ability of BA networks to resist random attacks. Moreover, the structure of an R_r-optimized network is found to be different from that of an onion-like network. To design robust scale-free networks (RSF) which are resistant to both targeted and random attacks (TRA) without changing the degree distribution, a memetic algorithm (MA) is proposed, labeled MA-RSFTRA. In the experiments, both synthetic scale-free networks and real-world networks are used to validate the performance of MA-RSFTRA. The results show that MA-RSFTRA has a great ability to search for the most robust network structure that is resistant to both targeted and random attacks.
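
    Schneider et al.'s targeted-attack robustness measure R, which the memetic algorithm optimizes together with a random-attack counterpart, is straightforward to compute; a sketch assuming numpy and networkx:

    ```python
    import networkx as nx
    import numpy as np

    def robustness_targeted(G):
        """Schneider et al.'s R: average fraction of nodes in the largest connected
        component while the currently highest-degree node is removed repeatedly."""
        G = G.copy()
        n = G.number_of_nodes()
        sizes = []
        for _ in range(n - 1):
            v = max(G.degree, key=lambda kv: kv[1])[0]   # recompute degrees after each removal
            G.remove_node(v)
            sizes.append(len(max(nx.connected_components(G), key=len)) / n)
        return float(np.sum(sizes)) / n

    G = nx.barabasi_albert_graph(300, 2, seed=0)
    print("R_t of a BA scale-free network:", robustness_targeted(G))
    ```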

  18. Downscaling stream flow time series from monthly to daily scales using an auto-regressive stochastic algorithm: StreamFARM

    NASA Astrophysics Data System (ADS)

    Rebora, N.; Silvestro, F.; Rudari, R.; Herold, C.; Ferraris, L.

    2016-06-01

    Downscaling methods are used to derive stream flow at a high temporal resolution from a data series that has a coarser time resolution. These algorithms are useful for many applications, such as water management and statistical analysis, because in many cases stream flow time series are available with coarse temporal steps (monthly), especially when considering historical data; however, in many cases, data that have a finer temporal resolution are needed (daily). In this study, we considered a simple but efficient stochastic auto-regressive model that is able to downscale the available stream flow data from monthly to daily time resolution and applied it to a large dataset that covered the entire North and Central American continent. Basins with different drainage areas and different hydro-climatic characteristics were considered, and the results show the general good ability of the analysed model to downscale monthly stream flows to daily stream flows, especially regarding the reproduction of the annual maxima. If the performance in terms of the reproduction of hydrographs and duration curves is considered, better results are obtained for those cases in which the hydrologic regime is such that the annual maxima stream flow show low or medium variability, which means that they have a low or medium coefficient of variation; however, when the variability increases, the performance of the model decreases.

  19. The backtracking survey propagation algorithm for solving random K-SAT problems

    PubMed Central

    Marino, Raffaele; Parisi, Giorgio; Ricci-Tersenghi, Federico

    2016-01-01

    Discrete combinatorial optimization has a central role in many scientific disciplines; however, for hard problems we lack linear-time algorithms that would allow us to solve very large instances. Moreover, it is still unclear what the key features are that make a discrete combinatorial optimization problem hard to solve. Here we study random K-satisfiability problems with K=3,4, which are known to be very hard close to the SAT-UNSAT threshold, where problems stop having solutions. We show that the backtracking survey propagation algorithm, in a time practically linear in the problem size, is able to find solutions very close to the threshold, in a region unreachable by any other algorithm. All solutions found have no frozen variables, thus supporting the conjecture that only unfrozen solutions can be found in linear time, and that a problem becomes impossible to solve in linear time when all solutions contain frozen variables. PMID:27694952

  20. The backtracking survey propagation algorithm for solving random K-SAT problems

    NASA Astrophysics Data System (ADS)

    Marino, Raffaele; Parisi, Giorgio; Ricci-Tersenghi, Federico

    2016-10-01

    Discrete combinatorial optimization has a central role in many scientific disciplines; however, for hard problems we lack linear-time algorithms that would allow us to solve very large instances. Moreover, it is still unclear what the key features are that make a discrete combinatorial optimization problem hard to solve. Here we study random K-satisfiability problems with K=3,4, which are known to be very hard close to the SAT-UNSAT threshold, where problems stop having solutions. We show that the backtracking survey propagation algorithm, in a time practically linear in the problem size, is able to find solutions very close to the threshold, in a region unreachable by any other algorithm. All solutions found have no frozen variables, thus supporting the conjecture that only unfrozen solutions can be found in linear time, and that a problem becomes impossible to solve in linear time when all solutions contain frozen variables.

  1. A randomized algorithm for two-cluster partition of a set of vectors

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khandeev, V. I.

    2015-02-01

    A randomized algorithm is substantiated for the strongly NP-hard problem of partitioning a finite set of vectors in Euclidean space into two clusters of given sizes according to the minimum-sum-of-squared-distances criterion. It is assumed that the centroid of one of the clusters is to be optimized and is determined as the mean value over all vectors in this cluster. The centroid of the other cluster is fixed at the origin. For an established parameter value, the algorithm finds an approximate solution of the problem in time that is linear in the space dimension and the input size of the problem for given values of the relative error and failure probability. The conditions are established under which the algorithm is asymptotically exact and runs in time that is linear in the space dimension and quadratic in the input size of the problem.

  2. Representation of high frequency Space Shuttle data by ARMA algorithms and random response spectra

    NASA Technical Reports Server (NTRS)

    Spanos, P. D.; Mushung, L. J.

    1990-01-01

    High frequency Space Shuttle lift-off data are treated by autoregressive (AR) and autoregressive moving-average (ARMA) digital algorithms. These algorithms provide useful information on the spectral densities of the data. Further, they yield spectral models which lend themselves to incorporation into the concept of the random response spectrum. This concept yields a reasonably smooth power spectrum for the design of structural and mechanical systems when the available data bank is limited. Due to the non-stationarity of the lift-off event, the pertinent data are split into three slices. Each slice is associated with a rather distinguishable phase of the lift-off event, where stationarity can be expected. The presented results are rather preliminary in nature; the aim is to call attention to the availability of the discussed digital algorithms and to the need to augment the Space Shuttle data bank as more flights are completed.

  3. A Note on the Behavior of the Randomized Kaczmarz Algorithm of Strohmer and Vershynin.

    PubMed

    Censor, Yair; Herman, Gabor T; Jiang, Ming

    2009-08-01

    In a recent paper by T. Strohmer and R. Vershynin ["A Randomized Kaczmarz Algorithm with Exponential Convergence", Journal of Fourier Analysis and Applications, published online on April 25, 2008] a "randomized Kaczmarz algorithm" is proposed for solving systems of linear equations Ax = b. In that algorithm the next equation to be used in an iterative Kaczmarz process is selected with a probability proportional to ‖a_i‖². The paper illustrates the superiority of this selection method for the reconstruction of a bandlimited function from its nonuniformly spaced sampling values. In this note we point out that the reported success of the algorithm of Strohmer and Vershynin in their numerical simulation depends on the specific choices that are made in translating the underlying problem, whose geometrical nature is "find a common point of a set of hyperplanes", into a system of algebraic equations. If this translation is carefully done, as in the numerical simulation provided by Strohmer and Vershynin for the reconstruction of a bandlimited function from its nonuniformly spaced sampling values, then indeed good performance may result. However, there will always be legitimate algebraic representations of the underlying problem (so that the set of solutions of the system of algebraic equations is exactly the set of points in the intersection of the hyperplanes), for which the selection method of Strohmer and Vershynin will perform in an inferior manner.
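
    The randomized Kaczmarz iteration under discussion is short enough to state in full: each step projects the current iterate onto the hyperplane of one equation, chosen with probability proportional to ‖a_i‖². A numpy sketch on a synthetic consistent system:

    ```python
    import numpy as np

    rng = np.random.default_rng(14)
    m, n = 500, 100
    A = rng.standard_normal((m, n))
    x_true = rng.standard_normal(n)
    b = A @ x_true                                    # a consistent system

    row_norm2 = np.einsum('ij,ij->i', A, A)
    prob = row_norm2 / row_norm2.sum()                # pick row i with probability ~ ||a_i||^2

    x = np.zeros(n)
    for _ in range(20_000):
        i = rng.choice(m, p=prob)
        x += (b[i] - A[i] @ x) / row_norm2[i] * A[i]  # project onto hyperplane <a_i, x> = b_i
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```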

  4. Three-Dimensional Analysis of the Effect of Material Randomness on the Damage Behaviour of CFRP Laminates with Stochastic Cohesive-Zone Elements

    NASA Astrophysics Data System (ADS)

    Khokhar, Zahid R.; Ashcroft, Ian A.; Silberschmidt, Vadim V.

    2014-02-01

    Laminated carbon fibre-reinforced polymer (CFRP) composites are already well established in structural applications where high specific strength and stiffness are required. Damage in these laminates is usually localised and may involve numerous mechanisms, such as matrix cracking, laminate delamination, fibre de-bonding or fibre breakage. Microstructures in CFRPs are non-uniform and irregular, resulting in an element of randomness in the localised damage. This may in turn affect the global properties and failure parameters of components made of CFRPs. This raises the question of whether the inherent stochasticity of localised damage is of significance in terms of the global properties and design methods for such materials. This paper presents a numerical modelling based analysis of the effect of material randomness on delamination damage in CFRP materials by the implementation of a stochastic cohesive-zone model (CZM) within the framework of the finite-element (FE) method. The initiation and propagation of delamination in a unidirectional CFRP double-cantilever beam (DCB) specimen loaded under mode-I was analyzed, accounting for the inherent microstructural stochasticity exhibited by such laminates via the stochastic CZM. Various statistical realizations for a half-scatter of 50 % of fracture energy were performed, with a probability distribution based on Weibull's two-parameter probability density function. The damaged area and the crack lengths in laminates were analyzed, and the results showed higher values of those parameters for random realizations compared to the uniform case for the same levels of applied displacement. This indicates that deterministic analysis of composites using average properties may be non-conservative and a method based on probability may be more appropriate.

  5. What a difference a parameter makes: a psychophysical comparison of random dot motion algorithms.

    PubMed

    Pilly, Praveen K; Seitz, Aaron R

    2009-06-01

    Random dot motion (RDM) displays have emerged as one of the standard stimulus types employed in psychophysical and physiological studies of motion processing. RDMs are convenient because it is straightforward to manipulate the relative motion energy for a given motion direction in addition to stimulus parameters such as the speed, contrast, duration, density, aperture, etc. However, as widely as RDMs are employed, so too do their implementation details vary. As a result, it is often difficult to make direct comparisons across studies employing different RDM algorithms and parameters. Here, we systematically measure the ability of human subjects to estimate motion direction for four commonly used RDM algorithms under a range of parameters in order to understand how these different algorithms compare in their perceptibility. We find that parametric and algorithmic differences can produce dramatically different performances. These effects, while surprising, can be understood in relation to pertinent neurophysiological data regarding spatiotemporal displacement tuning properties of cells in area MT and how the tuning function changes with stimulus contrast and retinal eccentricity. These data help give a baseline by which different RDM algorithms can be compared, demonstrate a need for clearly reporting RDM details in the methods of papers, and also pose new constraints and challenges to models of motion direction processing.
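
    As a concrete illustration of why implementation details matter, here is one common "percent coherence" variant in which signal dots step in the motion direction and noise dots are replotted at random; the paper compares several such variants, and this sketch (with assumed parameter names) is only one of them:

        import numpy as np

        def rdm_frame(dots, coherence, step, theta, box=1.0, rng=None):
            # One frame update: a random subset of dots moves coherently in
            # direction theta; the remainder are redrawn at random positions.
            rng = np.random.default_rng(rng)
            new = dots.copy()
            signal = rng.random(len(new)) < coherence
            new[signal] += step * np.array([np.cos(theta), np.sin(theta)])
            new[~signal] = rng.random((np.count_nonzero(~signal), 2)) * box
            return np.mod(new, box)  # wrap dots at the aperture edge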

  6. Randomized algorithms for stability and robustness analysis of high-speed communication networks.

    PubMed

    Alpcan, Tansu; Başar, Tamer; Tempo, Roberto

    2005-09-01

    This paper initiates a study toward developing and applying randomized algorithms for stability of high-speed communication networks. The focus is on congestion and delay-based flow controllers for sources, which are "utility maximizers" for individual users. First, we introduce a nonlinear algorithm for such source flow controllers, which uses as feedback aggregate congestion and delay information from bottleneck nodes of the network, and depends on a number of parameters, among which are link capacities, user preference for utility, and pricing. We then linearize this nonlinear model around its unique equilibrium point and perform a robustness analysis for a special symmetric case with a single bottleneck node. The "symmetry" here captures the scenario when certain utility and pricing parameters are the same across all active users, for which we derive closed-form necessary and sufficient conditions for stability and robustness under parameter variations. In addition, the ranges of values for the utility and pricing parameters for which stability is guaranteed are computed exactly. These results also admit counterparts for the case when the pricing parameters vary across users, but the utility parameter values are still the same. In the general nonsymmetric case, when closed-form derivation is not possible, we construct specific randomized algorithms which provide a probabilistic estimate of the local stability of the network. In particular, we use Monte Carlo as well as quasi-Monte Carlo techniques for the linearized model. The results obtained provide a complete analysis of congestion control algorithms for Internet-style networks with a single bottleneck node as well as for networks with general random topologies. PMID:16252829

  7. Cooperative effects of inherent stochasticity and random long-range connections on synchronization and coherence resonance in diffusively coupled calcium oscillators

    NASA Astrophysics Data System (ADS)

    Wang, Maosheng; Sun, Runzhi

    2014-03-01

    The cooperative effects of inherent stochasticity and random long-range connections (RLRCs) on synchronization and coherence resonance in networks of calcium oscillators have been investigated. Two different types of collective behaviors, coherence resonance (CR) and synchronization, have been studied numerically in the context of chemical Langevin equations (CLEs). In the CLEs, the reaction steps are all stochastic, including the exchange of calcium ions between adjacent and non-adjacent cells through the gap junctions. The calcium oscillators’ synchronization was characterized by the standard deviation of the cytosolic calcium concentrations. Meanwhile, the temporal coherence of the calcium spike train was characterized by the reciprocal coefficient of variance (RCV). Synchronization induced by RLRCs was observed, namely, the exchange of calcium ions between non-adjacent cells can promote the synchronization of the cells. Moreover, it was found that the RCV shows a clear peak when both inherent stochasticity and RLRCs are optimal, indicating the existence of CR. Since inherent stochasticity and RLRCs are two essential ingredients of cellular processes, synchronization and CR are also important for cells’ functions. The results reported in this paper are expected to be useful for understanding the dynamics of intercellular calcium signaling processes in vivo.
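
    The temporal-coherence measure used here is straightforward to compute from a simulated trace. A sketch, assuming spikes are defined by upward crossings of a chosen threshold (the threshold convention is an assumption, not taken from the paper):

        import numpy as np

        def spike_times(trace, t, threshold):
            # Times of upward threshold crossings of a calcium trace.
            above = trace >= threshold
            idx = np.where(~above[:-1] & above[1:])[0] + 1
            return t[idx]

        def reciprocal_cv(trace, t, threshold):
            # RCV = mean(ISI) / std(ISI); a larger RCV means a more coherent
            # spike train, and a peak in RCV versus noise level signals CR.
            isi = np.diff(spike_times(trace, t, threshold))
            if len(isi) < 2 or isi.std() == 0:
                return np.nan
            return isi.mean() / isi.std()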

  8. Investigation and appreciation of optimal output feedback. Volume 1: A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.

  9. Effects of time delay and random rewiring on the stochastic resonance in excitable small-world neuronal networks

    NASA Astrophysics Data System (ADS)

    Yu, Haitao; Wang, Jiang; Du, Jiwei; Deng, Bin; Wei, Xile; Liu, Chen

    2013-05-01

    The effects of time delay and rewiring probability on stochastic resonance and spatiotemporal order in small-world neuronal networks are studied in this paper. Numerical results show that, irrespective of the pacemaker introduced to one single neuron or all neurons of the network, the phenomenon of stochastic resonance occurs. The time delay in the coupling process can either enhance or destroy stochastic resonance on small-world neuronal networks. In particular, appropriately tuned delays can induce multiple stochastic resonances, which appear intermittently at integer multiples of the oscillation period of the pacemaker. More importantly, it is found that the small-world topology can significantly affect the stochastic resonance on excitable neuronal networks. For small time delays, increasing the rewiring probability can largely enhance the efficiency of pacemaker-driven stochastic resonance. We argue that the time delay and the rewiring probability both play a key role in determining the ability of the small-world neuronal network to improve the noise-induced outreach of the localized subthreshold pacemaker.
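
    The abstract does not specify the exact network construction, but a generic Watts-Strogatz-style small-world topology, in which each ring edge is rewired with probability p to create random long-range shortcuts, can be sketched as follows (assuming n is much larger than k):

        import numpy as np

        def small_world_edges(n, k, p, rng=None):
            # Ring of n nodes, each linked to its k nearest neighbours on one
            # side; every edge is rewired to a random target with probability p.
            rng = np.random.default_rng(rng)
            edges = set()
            for i in range(n):
                for j in range(1, k + 1):
                    a, b = i, (i + j) % n
                    if rng.random() < p:
                        b = int(rng.integers(n))
                        while b == a or (min(a, b), max(a, b)) in edges:
                            b = int(rng.integers(n))
                    edges.add((min(a, b), max(a, b)))
            return edges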

  11. Hyperspectral image clustering method based on artificial bee colony algorithm and Markov random fields

    NASA Astrophysics Data System (ADS)

    Sun, Xu; Yang, Lina; Gao, Lianru; Zhang, Bing; Li, Shanshan; Li, Jun

    2015-01-01

    Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, the drawbacks are obvious, including the over-simplicity of computing models and underutilized spatial information. In recent years, some studies have been conducted trying to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms to propose an ABC-MRF-cluster model to solve the problems mentioned above. In this model, a typical ABC algorithm framework is adopted, in which cluster centers and the results of an iterated conditional modes algorithm are treated as the feasible solutions and objective functions, respectively, and MRF is modified to be capable of dealing with the clustering problem. Finally, four datasets and two indices are used to show that the application of ABC-cluster and ABC-MRF-cluster methods could help to obtain better image accuracy than conventional methods. Specifically, the ABC-cluster method is superior when judged by a higher power of spectral discrimination, whereas the ABC-MRF-cluster method provides better results when judged by the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, ABC-cluster and ABC-MRF-cluster showed good stability.

  12. Stochastic differential equations

    SciTech Connect

    Sobczyk, K.

    1990-01-01

    This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed, in particular an insight is given into both the mathematical structure, and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.

  13. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo

    2015-01-01

    Electrical (DC) and Transient Electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished by individual 1D models, often resulting in ambiguous models. This can be explained by the way the two methodologies sample the medium beneath the surface. Vertical Electrical Sounding (VES) is good at marking resistive structures, while Transient Electromagnetic sounding (TEM) is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed, aiming to exploit the best of both methods. The program uses the CRS - Controlled Random Search - algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. Initially, the algorithm was tested with synthetic data, and then it was used to invert experimental data from two places in the Paraná sedimentary basin (the cities of Bebedouro and Pirassununga), both located in São Paulo State, Brazil. The geoelectric model obtained from 1D joint inversion of VES and TEM data is similar to the real geological conditions, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological studies.
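
    The core of CRS is a reflection move over a population of candidate models, which is simple to sketch. A minimal version of Price's Controlled Random Search for a bounded minimization problem (population size, iteration budget, and defaults are illustrative assumptions, not the authors' settings):

        import numpy as np

        def crs_minimize(f, bounds, pop_size=50, iters=5000, rng=None):
            # Keep a population of candidates; reflect a random point through the
            # centroid of a random simplex and replace the current worst point
            # whenever the trial improves on it.
            rng = np.random.default_rng(rng)
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(lo)
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            vals = np.array([f(p) for p in pop])
            for _ in range(iters):
                idx = rng.choice(pop_size, size=dim + 1, replace=False)
                centroid = pop[idx[:-1]].mean(axis=0)
                trial = 2.0 * centroid - pop[idx[-1]]
                if np.any(trial < lo) or np.any(trial > hi):
                    continue  # discard trials outside the model bounds
                worst = int(vals.argmax())
                ft = f(trial)
                if ft < vals[worst]:
                    pop[worst], vals[worst] = trial, ft
            best = int(vals.argmin())
            return pop[best], vals[best]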

  14. A well-posed and stable stochastic Galerkin formulation of the incompressible Navier-Stokes equations with random data

    NASA Astrophysics Data System (ADS)

    Pettersson, Per; Nordström, Jan; Doostan, Alireza

    2016-02-01

    We present a well-posed stochastic Galerkin formulation of the incompressible Navier-Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results and we obtain high-order accurate results for the solution variables and the velocity divergence converges to zero as the mesh is refined.

  15. Evolving random fractal Cantor superlattices for the infrared using a genetic algorithm.

    PubMed

    Bossard, Jeremy A; Lin, Lan; Werner, Douglas H

    2016-01-01

    Ordered and chaotic superlattices have been identified in Nature that give rise to a variety of colours reflected by the skin of various organisms. In particular, organisms such as silvery fish possess superlattices that reflect a broad range of light from the visible to the UV. Such superlattices have previously been identified as 'chaotic', but we propose that apparent 'chaotic' natural structures, which have been previously modelled as completely random structures, should have an underlying fractal geometry. Fractal geometry, often described as the geometry of Nature, can be used to mimic structures found in Nature, but deterministic fractals produce structures that are too 'perfect' to appear natural. Introducing variability into fractals produces structures that appear more natural. We suggest that the 'chaotic' (purely random) superlattices identified in Nature are more accurately modelled by multi-generator fractals. Furthermore, we introduce fractal random Cantor bars as a candidate for generating both ordered and 'chaotic' superlattices, such as the ones found in silvery fish. A genetic algorithm is used to evolve optimal fractal random Cantor bars with multiple generators targeting several desired optical functions in the mid-infrared and the near-infrared. We present optimized superlattices demonstrating broadband reflection as well as single and multiple pass bands in the near-infrared regime.
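
    A random Cantor bar itself is easy to generate. In the sketch below, each generation subdivides every remaining bar into equal pieces and keeps a random subset of them; the retained segments can then be read off as layer positions in a superlattice stack. The generator parameters are illustrative, and a genetic algorithm as in the paper would evolve them against an optical merit function:

        import numpy as np

        def random_cantor_bars(generations=4, splits=4, keep=2, rng=None):
            # Each generation: split every bar into `splits` equal pieces and
            # keep `keep` of them, chosen at random (the source of variability).
            rng = np.random.default_rng(rng)
            bars = [(0.0, 1.0)]
            for _ in range(generations):
                nxt = []
                for a, b in bars:
                    w = (b - a) / splits
                    for i in sorted(rng.choice(splits, size=keep, replace=False)):
                        nxt.append((a + i * w, a + (i + 1) * w))
                bars = nxt
            return bars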

  18. Cooperative mobile agents search using beehive partitioned structure and Tabu Random search algorithm

    NASA Astrophysics Data System (ADS)

    Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.

    2013-05-01

    In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent in efficiency and minimizing exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has recently developed many applications that would benefit from the approach presented in this work, including search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm in combination with hexagonal partitioning is simulated and analyzed, and the advantages of this approach are presented and discussed.

  19. Fault diagnosis in spur gears based on genetic algorithm and random forest

    NASA Astrophysics Data System (ADS)

    Cerrada, Mariela; Zurita, Grover; Cabrera, Diego; Sánchez, René-Vinicio; Artés, Mariano; Li, Chuan

    2016-03-01

    There are growing demands for condition-based monitoring of gearboxes, and therefore new methods to improve the reliability, effectiveness, and accuracy of gear fault detection ought to be evaluated. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance of the diagnostic models. On the other hand, random forest classifiers are suitable models in industrial environments where large data samples are not usually available for training such diagnostic models. The main aim of this research is to build up a robust system for multi-class fault diagnosis in spur gears, by selecting the best set of condition parameters in the time, frequency and time-frequency domains, which are extracted from vibration signals. The diagnostic system is built using genetic algorithms and a classifier based on random forest, in a supervised environment. The original set of condition parameters is reduced by around 66% relative to its initial size by using genetic algorithms, while still achieving an acceptable classification precision of over 97%. The approach is tested on real vibration signals by considering several fault classes, one of them being an incipient fault, under different running conditions of load and velocity.
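
    The overall wrapper-style scheme (a genetic algorithm searching over feature masks, scored by a random forest) can be sketched in a few lines. This is a generic reconstruction, not the authors' exact implementation; population size, mutation rate, and cross-validation settings are assumptions:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def ga_feature_selection(X, y, pop=20, gens=15, mut=0.05, rng=None):
            # Evolve boolean feature masks; fitness is the cross-validated
            # accuracy of a random forest on the selected condition parameters.
            rng = np.random.default_rng(rng)
            n = X.shape[1]

            def fitness(mask):
                if not mask.any():
                    return 0.0
                clf = RandomForestClassifier(n_estimators=100, random_state=0)
                return cross_val_score(clf, X[:, mask], y, cv=3).mean()

            masks = rng.random((pop, n)) < 0.5
            scores = np.array([fitness(m) for m in masks])
            for _ in range(gens):
                children = []
                for _ in range(pop):
                    def pick():  # binary tournament selection
                        i, j = rng.choice(pop, size=2, replace=False)
                        return masks[i] if scores[i] >= scores[j] else masks[j]
                    child = np.where(rng.random(n) < 0.5, pick(), pick())
                    child ^= rng.random(n) < mut  # bit-flip mutation
                    children.append(child)
                masks = np.array(children)
                scores = np.array([fitness(m) for m in masks])
            best = int(scores.argmax())
            return masks[best], scores[best]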

  20. Improved random-starting method for the EM algorithm for finite mixtures of regressions.

    PubMed

    Schepers, Jan

    2015-03-01

    Two methods for generating random starting values for the expectation maximization (EM) algorithm are compared in terms of yielding maximum likelihood parameter estimates in finite mixtures of regressions. One of these methods is ubiquitous in applications of finite mixture regression, whereas the other method is an alternative that appears not to have been used so far. The two methods are compared in two simulation studies and on an illustrative data set. The results show that the alternative method yields solutions with likelihood values at least as high as, and often higher than, those returned by the standard method. Moreover, analyses of the illustrative data set show that the results obtained by the two methods may differ considerably with regard to some of the substantive conclusions. The results reported in this article indicate that in applications of finite mixture regression, consideration should be given to the type of mechanism chosen to generate random starting values for the EM algorithm. In order to facilitate the use of the proposed alternative method, an R function implementing the approach is provided in the Appendix of the article.

  1. Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm

    NASA Astrophysics Data System (ADS)

    Kaczałek, B.; Borkowski, A.

    2016-06-01

    The objective of this research is to detect points that describe a road surface in an unclassified point cloud of airborne laser scanning (ALS). For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider ALS points representing only the last echo. For these points, RGB, intensity, the normal vectors, their mean values and the standard deviations are provided. Moreover, local and global height variations are taken into account as components of a feature vector. The feature vectors are calculated on the basis of a 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m² that represent a large urban scene. A significance level of 15% was set for a decision tree of the learning algorithm. As a result of the Random Forest classification we received two subsets of ALS points, one of which represents points belonging to the road network. In the classification evaluation we achieved an overall classification accuracy of at least 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.

  2. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus", and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates the parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs were selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is
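
    The core consensus step for a constant-velocity target is compact. A sketch of a RANSAC fit of linear motion to candidate detections (t, x, y), where two samples define a trial track and detections near the prediction vote as inliers; the tolerance and iteration count are illustrative, not the paper's values:

        import numpy as np

        def ransac_linear_track(t, x, y, n_iter=500, tol=1.5, rng=None):
            # Fit x(t) = x0 + vx*t, y(t) = y0 + vy*t; keep the trial track that
            # gathers the most inliers within `tol` pixels of its prediction.
            rng = np.random.default_rng(rng)
            best_inliers = np.zeros(len(t), dtype=bool)
            for _ in range(n_iter):
                i, j = rng.choice(len(t), size=2, replace=False)
                if t[i] == t[j]:
                    continue  # two detections in the same frame give no velocity
                vx = (x[j] - x[i]) / (t[j] - t[i])
                vy = (y[j] - y[i]) / (t[j] - t[i])
                px = x[i] + vx * (t - t[i])
                py = y[i] + vy * (t - t[i])
                inliers = np.hypot(x - px, y - py) < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return best_inliers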

  3. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics

    PubMed Central

    2010-01-01

    Background The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Results Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA) together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameters setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs in 32 and 64-bits systems, and takes advantage of multiprocessor and multicore computers. Conclusions The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2

  4. Distribution of transition times in a stochastic model of excitable cell: Insights into the cell-intrinsic mechanisms of randomness in neuronal interspike intervals

    NASA Astrophysics Data System (ADS)

    Requena-Carrión, Jesús; Requena-Carrión, Víctor J.

    2016-04-01

    In this paper, we develop an analytical approach to studying random patterns of activity in excitable cells. Our analytical approach uses a two-state stochastic model of excitable system based on the electrophysiological properties of refractoriness and restitution, which characterize cell recovery after excitation. By applying the notion of probability density flux, we derive the distributions of transition times between states and the distribution of interspike interval (ISI) durations for a constant applied stimulus. The derived ISI distribution is unimodal and, provided that the time spent in the excited state is constant, can be approximated by a Rayleigh peak followed by an exponential tail. We then explore the role of the model parameters in determining the shape of the derived distributions and the ISI coefficient of variation. Finally, we use our analytical results to study simulation results from the stochastic Morris-Lecar neuron and from a three-state extension of the proposed stochastic model, which is capable of reproducing multimodal ISI histograms.

  5. An efficient voting algorithm for finding additive biclusters with random background.

    PubMed

    Xiao, Jing; Wang, Lusheng; Liu, Xiaowen; Jiang, Tao

    2008-12-01

    The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and of columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n × m background matrix is a random integer from [0, L − 1] for some integer L, and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L − 1] with probability θ. We propose an O(n²m) time algorithm based on voting to solve the problem. We show that when k ≥ Ω(√(n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 − 9/n². We also implement our algorithm as a C++ program named VOTE. The implementation incorporates several ideas for estimating the size of an implanted bicluster, adjusting the threshold in voting, dealing with small biclusters, and dealing with overlapping implanted biclusters. Our experimental results on both simulated and real datasets show that VOTE can find biclusters with a high accuracy and speed. PMID:19040364

  6. Precise algorithm to generate random sequential addition of hard hyperspheres at saturation.

    PubMed

    Zhang, G; Torquato, S

    2013-11-01

    The study of the packing of hard hyperspheres in d-dimensional Euclidean space R^d has been a topic of great interest in statistical mechanics and condensed matter theory. While the densest known packings are ordered in sufficiently low dimensions, it has been suggested that in sufficiently large dimensions, the densest packings might be disordered. The random sequential addition (RSA) time-dependent packing process, in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap, is a useful packing model to study disorder in high dimensions. Of particular interest is the infinite-time saturation limit in which the available space for another sphere tends to zero. However, the associated saturation density has been determined in all previous investigations by extrapolating the density results for nearly saturated configurations to the saturation limit, which necessarily introduces numerical uncertainties. We have refined an algorithm devised by us [S. Torquato, O. U. Uche, and F. H. Stillinger, Phys. Rev. E 74, 061308 (2006)] to generate RSA packings of identical hyperspheres. The improved algorithm produces such packings that are guaranteed to contain no available space in a large simulation box using finite computational time with heretofore unattained precision and across the widest range of dimensions (2≤d≤8). We have also calculated the packing and covering densities, pair correlation function g₂(r), and structure factor S(k) of the saturated RSA configurations. As the space dimension increases, we find that pair correlations markedly diminish, consistent with a recently proposed "decorrelation" principle, and the degree of "hyperuniformity" (suppression of infinite-wavelength density fluctuations) increases. We have also calculated the void exclusion probability in order to compute the so-called quantizer error of the RSA packings, which is related to the second moment of inertia of the average
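
    For orientation, the RSA process itself (without the saturation guarantee that is the paper's contribution) can be sketched in two dimensions. Proposed centres are accepted only if they keep at least one diameter from all accepted centres; the fixed attempt budget is an assumption, whereas the refined algorithm instead tracks the remaining available space until it vanishes:

        import numpy as np

        def rsa_disks(radius, attempts=100000, rng=None):
            # Naive RSA of equal disks in the unit square with periodic
            # boundaries; rejected proposals are simply discarded.
            rng = np.random.default_rng(rng)
            centres = []
            for _ in range(attempts):
                p = rng.random(2)
                if centres:
                    d = np.abs(np.array(centres) - p)
                    d = np.minimum(d, 1.0 - d)  # nearest periodic image
                    if (np.hypot(d[:, 0], d[:, 1]) < 2 * radius).any():
                        continue
                centres.append(p)
            return np.array(centres)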

  9. Statistical physics analysis of the computational complexity of solving random satisfiability problems using backtrack algorithms

    NASA Astrophysics Data System (ADS)

    Cocco, S.; Monasson, R.

    2001-08-01

    The computational complexity of solving random 3-Satisfiability (3-SAT) problems is investigated using statistical physics concepts and techniques related to phase transitions, growth processes and (real-space) renormalization flows. 3-SAT is a representative example of hard computational tasks; it consists in knowing whether a set of αN randomly drawn logical constraints involving N Boolean variables can be satisfied altogether or not. Widely used solving procedures, as the Davis-Putnam-Loveland-Logemann (DPLL) algorithm, perform a systematic search for a solution, through a sequence of trials and errors represented by a search tree. The size of the search tree accounts for the computational complexity, i.e. the amount of computational efforts, required to achieve resolution. In the present study, we identify, using theory and numerical experiments, easy (size of the search tree scaling polynomially with N) and hard (exponential scaling) regimes as a function of the ratio α of constraints per variable. The typical complexity is explicitly calculated in the different regimes, in very good agreement with numerical simulations. Our theoretical approach is based on the analysis of the growth of the branches in the search tree under the operation of DPLL. On each branch, the initial 3-SAT problem is dynamically turned into a more generic 2+p-SAT problem, where p and 1 - p are the fractions of constraints involving three and two variables respectively. The growth of each branch is monitored by the dynamical evolution of α and p and is represented by a trajectory in the static phase diagram of the random 2+p-SAT problem. Depending on whether or not the trajectories cross the boundary between satisfiable and unsatisfiable phases, single branches or full trees are generated by DPLL, resulting in easy or hard resolutions. Our picture for the origin of complexity can be applied to other computational problems solved by branch and bound algorithms.
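
    The search-tree picture corresponds directly to the recursion of DPLL. A minimal sketch, with clauses as lists of nonzero integers (a negative literal is a negated variable) and a random branching rule standing in for the heuristics of production solvers:

        import random

        def dpll(clauses, assignment=None):
            # Each recursive call is a node of the search tree; returns a
            # satisfying assignment, or None if this branch is contradictory.
            assignment = assignment or {}
            simplified = []
            for c in clauses:
                if any(assignment.get(abs(l)) == (l > 0) for l in c):
                    continue  # clause already satisfied
                c = [l for l in c if abs(l) not in assignment]
                if not c:
                    return None  # empty clause: contradiction on this branch
                simplified.append(c)
            if not simplified:
                return assignment  # all clauses satisfied
            unit = next((c[0] for c in simplified if len(c) == 1), None)
            lit = unit if unit is not None else random.choice(random.choice(simplified))
            values = [lit > 0] if unit is not None else [lit > 0, lit < 0]
            for value in values:  # unit propagation forces a single branch
                result = dpll(simplified, {**assignment, abs(lit): value})
                if result is not None:
                    return result
            return None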

  10. Combining Spectral and Texture Features Using Random Forest Algorithm: Extracting Impervious Surface Area in Wuhan

    NASA Astrophysics Data System (ADS)

    Shao, Zhenfeng; Zhang, Yuan; Zhang, Lei; Song, Yang; Peng, Minjun

    2016-06-01

    Impervious surface area (ISA) is one of the most important indicators of urban environments. At present, based on multi-resolution remote sensing images, numerous approaches have been proposed to extract impervious surface, using statistical estimation, sub-pixel classification, and spectral mixture analysis. Through these methods, impervious surfaces can be effectively applied to regional-scale planning and management. However, for a large-scale region, high resolution remote sensing images can provide more details and are therefore more conducive to environmental monitoring and urban management. Since the purpose of this study is to map impervious surfaces more effectively, three classification algorithms (random forests, decision trees, and artificial neural networks) were tested for their ability to map impervious surface. Random forests outperformed decision trees and artificial neural networks in precision. Combining the spectral indices and texture, random forests is applied to impervious surface extraction with a producer's accuracy of 0.98, a user's accuracy of 0.97, an overall accuracy of 0.98 and a kappa coefficient of 0.97.

  11. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    SciTech Connect

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  13. Stochastic reconstruction of sandstones

    PubMed

    Manwart; Torquato; Hilfer

    2000-07-01

    A simulated annealing algorithm is employed to generate a stochastic model for a Berea sandstone and a Fontainebleau sandstone, each with a prescribed two-point probability function, lineal-path function, and "pore size" distribution function, respectively. We find that the temperature decrease of the annealing has to be rather quick to yield isotropic and percolating configurations. A comparison of simple morphological quantities indicates good agreement between the reconstructions and the original sandstones. Also, the mean survival time of a random walker in the pore space is reproduced with good accuracy. However, a more detailed investigation by means of local porosity theory shows that there may be significant differences in the geometrical connectivity between the reconstructed and the experimental samples.
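
    The reconstruction loop is a standard annealing over pixel swaps. A sketch that matches only a directional two-point function S2 (the paper additionally matches lineal-path and pore-size functions); s2_target is the same statistic measured on the reference image, and the fast linear cooling schedule is an illustrative choice echoing the finding that cooling must be quick:

        import numpy as np

        def s2_row(img):
            # Two-point probability along rows, periodic boundaries, via FFT.
            f = np.fft.rfft(img, axis=1)
            corr = np.fft.irfft(f * np.conj(f), n=img.shape[1], axis=1)
            return corr.mean(axis=0) / img.shape[1]

        def anneal_reconstruct(s2_target, shape, steps=20000, t0=1e-5, rng=None):
            rng = np.random.default_rng(rng)
            img = np.zeros(shape)
            n_pore = int(round(s2_target[0] * img.size))  # S2(0) is the porosity
            img.ravel()[rng.choice(img.size, n_pore, replace=False)] = 1.0
            energy = ((s2_row(img) - s2_target) ** 2).sum()
            for step in range(steps):
                T = t0 * (1.0 - step / steps)  # fast linear cooling
                i = tuple(rng.integers(shape))
                j = tuple(rng.integers(shape))
                if img[i] == img[j]:
                    continue
                img[i], img[j] = img[j], img[i]  # porosity-preserving swap
                e_new = ((s2_row(img) - s2_target) ** 2).sum()
                accept = e_new < energy or rng.random() < np.exp(
                    -(e_new - energy) / max(T, 1e-300))
                if accept:
                    energy = e_new
                else:
                    img[i], img[j] = img[j], img[i]  # reject: undo the swap
            return img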

  14. Randomized selection on the GPU

    SciTech Connect

    Monroe, Laura Marie; Wendelberger, Joanne R; Michalak, Sarah E

    2011-01-13

    We implement here a fast and memory-sparing probabilistic top-N selection algorithm on the GPU. To our knowledge, this is the first direct selection in the literature for the GPU. The algorithm proceeds via a probabilistic guess-and-check process searching for the Nth element. It always gives a correct result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces the average time required for the algorithm. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
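
    The guess-and-check idea can be illustrated with a serial Las Vegas sketch: random pivots shrink the candidate set so only a small remainder needs full processing, the result is always exact, and only the running time is random. (The GPU kernel structure of the paper is not reproduced here.)

        import random

        def randomized_top_n(data, n):
            # Exact top-n selection with randomized pivoting.
            need, kept, items = n, [], list(data)
            while need and items:
                pivot = random.choice(items)
                upper = [x for x in items if x > pivot]
                if len(upper) >= need:
                    items = upper  # the whole answer lies above the pivot
                else:
                    kept += upper
                    need -= len(upper)
                    equal = [x for x in items if x == pivot]
                    take = min(need, len(equal))
                    kept += equal[:take]
                    need -= take
                    items = [x for x in items if x < pivot]
            return sorted(kept, reverse=True)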

  15. Adaptive stochastic cellular automata: Applications

    NASA Astrophysics Data System (ADS)

    Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.

    1990-09-01

    The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems studied are controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.

  16. Data Security in Ad Hoc Networks Using Randomization of Cryptographic Algorithms

    NASA Astrophysics Data System (ADS)

    Krishna, B. Ananda; Radha, S.; Keshava Reddy, K. Chenna

    Ad hoc networks are a new wireless networking paradigm for mobile hosts. Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure. Instead, hosts rely on each other to keep the network connected. Military tactical and other security-sensitive operations are still the main applications of ad hoc networks, although there is a trend to adopt ad hoc networks for commercial uses due to their unique properties. One main challenge in the design of these networks is how to feasibly detect and defend against the major attacks on data, such as impersonation and unauthorized data modification. Also, in the same network some nodes may be malicious, with the objective of degrading the network performance. In this study, we propose a security model in which the packets are encrypted and decrypted using multiple algorithms, with a random selection scheme. The performance of the proposed model is analyzed and it is observed that there is no increase in control overhead, but a slight delay is introduced due to the encryption process. We conclude that the proposed security model works well for heavily loaded networks with high mobility and can be extended to more cryptographic algorithms.

  17. Development and Evaluation of a New Air Exchange Rate Algorithm for the Stochastic Human Exposure and Dose Simulation Model

    EPA Science Inventory

    between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...

  18. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
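
    In the simplest homogeneous 1-D case, a TDRW particle jumps directly to the observation point and only its random travel time is sampled. One common choice, consistent with the approximate advection-dispersion solutions the paper starts from, is an inverse Gaussian travel time with mean dx/v and shape dx²/(2D); treating this as the sampling law is an assumption of the sketch:

        import numpy as np

        def tdrw_arrival_times(n_particles, x_obs, v, D, rng=None):
            # Sample advective-dispersive travel times over the distance x_obs
            # in a homogeneous medium; in a piecewise-homogeneous medium one
            # would sum such times segment by segment between checkpoints.
            rng = np.random.default_rng(rng)
            mean = x_obs / v
            shape = x_obs ** 2 / (2.0 * D)
            return rng.wald(mean, shape, size=n_particles)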

  19. Harmonics elimination algorithm for operational modal analysis using random decrement technique

    NASA Astrophysics Data System (ADS)

    Modak, S. V.; Rawal, Chetan; Kundra, T. K.

    2010-05-01

    Operational modal analysis (OMA) extracts the modal parameters of a structure from its output response, generally measured during operation. When applied to mechanical engineering structures, OMA is often faced with the problem of harmonics present in the output response, which can cause erroneous modal extraction. This paper demonstrates for the first time that the random decrement (RD) method can be efficiently employed to eliminate the harmonics from the randomdec signatures. Further, the work shows that even large-amplitude harmonics can be effectively eliminated by including additional random excitation; this obviously need not be recorded for analysis, as is the case with any other OMA method. The free decays obtained from RD have been used for system modal identification using the Eigensystem Realization Algorithm (ERA). The proposed harmonic elimination method has an advantage over previous methods in that it does not require the harmonic frequencies to be known and can be used for multiple harmonics, including periodic signals. The theory behind harmonic elimination is first developed and validated. The effectiveness of the method is demonstrated through a simulated study and then by experimental studies on a beam and a more complex F-shape structure, which resembles in shape the skeleton of a drilling or milling machine tool.
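
    The random decrement signature itself is a triggered ensemble average, which can be sketched directly. The level-crossing trigger condition below is one standard choice, and the segment length is an assumption:

        import numpy as np

        def random_decrement(x, level, seg_len):
            # Average the segments starting at each upward crossing of `level`;
            # uncorrelated random content cancels in the average, leaving a
            # free-decay estimate suitable for modal identification (e.g. ERA).
            idx = np.where((x[:-1] < level) & (x[1:] >= level))[0] + 1
            idx = idx[idx + seg_len <= len(x)]
            if len(idx) == 0:
                raise ValueError("no trigger points found")
            return np.mean([x[i:i + seg_len] for i in idx], axis=0)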

  20. A Constructive Mean-Field Analysis of Multi-Population Neural Networks with Random Synaptic Weights and Stochastic Inputs

    PubMed Central

    Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno

    2008-01-01

    We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631

  1. PIPS-SBB: A Parallel Distributed-Memory Branch-and-Bound Algorithm for Stochastic Mixed-Integer Programs

    DOE PAGESBeta

    Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak

    2016-05-01

    Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive-form mixed-integer programs, problem instances can exceed the available memory on a single workstation. To overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating branch-and-bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.

  2. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    SciTech Connect

    Soufi, M; Asl, A Kamali; Geramifar, P

    2015-06-15

    Purpose: The objective of this study was to find the best seed localization parameters for applying the random walk algorithm to lung tumor delineation in positron emission tomography (PET) images. Methods: PET images suffer from statistical noise, and tumor delineation in these images is therefore a challenging task. The random walk algorithm, a graph-based image segmentation technique, is reliably robust to image noise, and its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm in MATLAB. The validation and verification of the algorithm were done with a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (in incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with different voxel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels at different percentages of the maximum standardized uptake value (SUVmax): at least 70%, 80%, 90%, or 100% of SUVmax for foreground seeds, and at most 20% to 55% of SUVmax (in 5% increments) for background seeds. To investigate the algorithm's performance on clinical data, 19 patients with lung tumors were also studied. The contours produced by the algorithm were compared with manual contours drawn by a nuclear medicine expert as the ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% of SUVmax as foreground seeds and pixels up to 30% of SUVmax as background seeds. A mean Dice similarity coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of random walk algorithm in PET image segmentation assure its application for radiation treatment planning and
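
    The reported seed scheme maps directly onto off-the-shelf random walker implementations. A sketch using scikit-image (the 70%/30% thresholds follow the abstract; everything else, including the toy volume, is illustrative rather than the authors' MATLAB code):

      import numpy as np
      from skimage.segmentation import random_walker

      def segment_pet(suv, fg_frac=0.70, bg_frac=0.30):
          # Voxels above 70% of SUVmax seed the tumour (label 1), voxels
          # below 30% seed the background (label 2); label 0 is left for
          # the random walker to decide.
          seeds = np.zeros(suv.shape, dtype=np.uint8)
          seeds[suv >= fg_frac * suv.max()] = 1
          seeds[suv <= bg_frac * suv.max()] = 2
          return random_walker(suv, seeds, beta=130) == 1

      # Toy volume: a bright sphere in a noisy background
      z, y, x = np.ogrid[:40, :40, :40]
      suv = 4.0 * ((x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 64)
      suv = suv + np.random.default_rng(1).normal(0, 0.3, suv.shape)
      mask = segment_pet(suv)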

  3. Study on high order perturbation-based nonlinear stochastic finite element method for dynamic problems

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Yao, Jing-Zheng

    2010-12-01

    Several algorithms are proposed for developing a framework for the perturbation-based stochastic finite element method (PSFEM) applied to nonlinear dynamic problems with large variations. For this purpose, algorithms and a framework for SFEM based on the stochastic virtual work principle were studied. To demonstrate the validity and practicality of the algorithms and framework, numerical examples of nonlinear dynamic problems with large variations were calculated and compared with the Monte Carlo simulation method. This comparison shows that the proposed approaches are accurate and effective for the nonlinear dynamic analysis of structures with random parameters.

  4. Image encryption algorithm based on the random local phase encoding in gyrator transform domains

    NASA Astrophysics Data System (ADS)

    Liu, Zhengjun; Yang, Meng; Liu, Wei; Li, She; Gong, Min; Liu, Wanyu; Liu, Shutian

    2012-09-01

    A random local phase encoding method is presented for encrypting a secret image. Random polygons are introduced to delimit the local regions of random phase encoding, and the data located inside each polygon are encoded by random phase encoding. The random phase data constitute the main key in this encryption method. Different random phases, calculated using a monotonic function, are employed. The random data defining the polygons serve as an additional key that enhances the security of the image encryption scheme. Numerical simulations demonstrate the performance of the proposed encryption approach.
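
    A toy numerical sketch of the local phase encoding idea, with a rectangle standing in for a random polygon and the gyrator-transform stages of the full scheme omitted:

      import numpy as np

      rng = np.random.default_rng(11)

      # Encrypt a region of a real image by multiplying it with a random
      # phase screen; decryption applies the conjugate phase key.
      img = rng.random((128, 128))
      field = img.astype(complex)
      r0, r1, c0, c1 = 30, 90, 40, 100                 # region to encode
      phase = np.exp(2j * np.pi * rng.random((r1 - r0, c1 - c0)))
      field[r0:r1, c0:c1] *= phase                     # local phase encoding
      recovered = field.copy()
      recovered[r0:r1, c0:c1] *= np.conj(phase)        # apply the phase key
      assert np.allclose(np.abs(recovered), img)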

  5. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with a focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared with uniform-at-random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
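
    The initialization/restart interplay studied here can be pictured with a generic skeleton of stochastic local search with restarts (this is a stand-in, not the authors' Stochastic Greedy Search or Viterbi-based initialization):

      import numpy as np

      def sls_restarts(score, init, neighbor, n_restarts, n_steps, rng):
          # Keep the best state found over several independently
          # initialized stochastic hill-climbing runs.
          best, best_score = None, -np.inf
          for _ in range(n_restarts):
              x = init(rng)
              s = score(x)
              for _ in range(n_steps):
                  y = neighbor(x, rng)
                  sy = score(y)
                  if sy >= s:           # greedy acceptance of random moves
                      x, s = y, sy
              if s > best_score:
                  best, best_score = x, s
          return best, best_score

      # Toy usage: maximize a 1-D function with random restarts
      best, _ = sls_restarts(lambda x: -(x - 3.0) ** 2,
                             init=lambda r: r.uniform(-10, 10),
                             neighbor=lambda x, r: x + r.normal(0, 0.5),
                             n_restarts=5, n_steps=200,
                             rng=np.random.default_rng(0))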

  6. A fast random walk algorithm for computing the pulsed-gradient spin-echo signal in multiscale porous media.

    PubMed

    Grebenkov, Denis S

    2011-02-01

    A new method for computing the signal attenuation due to restricted diffusion in a linear magnetic field gradient is proposed. A fast random walk (FRW) algorithm for simulating random trajectories of diffusing spin-bearing particles is combined with gradient encoding. As the random moves of an FRW are continuously adapted to the local geometrical length scales, the method is efficient for simulating pulsed-gradient spin-echo experiments in hierarchical or multiscale porous media such as concrete, sandstones, sedimentary rocks and, potentially, brain or lungs. PMID:21159532

  8. A rigorous framework for multiscale simulation of stochastic cellular networks

    PubMed Central

    Chevalier, Michael W.; El-Samad, Hana

    2009-01-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-cell variability even in clonal populations. Stochastic biochemical networks are modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). The CME is not solvable except for the simplest cases, and one has to resort to kinetic Monte Carlo techniques to simulate the stochastic trajectories of the biochemical network under study. A commonly used algorithm of this kind is the stochastic simulation algorithm (SSA). Because it tracks every biochemical reaction that occurs in a given system, the SSA presents computational difficulties, especially when there is a vast disparity in the timescales of the reactions or in the numbers of molecules involved in these reactions. This is common in cellular networks, and many approximation algorithms have evolved to alleviate the computational burdens of the SSA. Here, we present a rigorously derived modified CME framework based on the partition of a biochemically reacting system into restricted and unrestricted reactions. Although this modified CME decomposition is as analytically difficult as the original CME, it can be naturally used to generate a hierarchy of approximations at different levels of accuracy. Most importantly, some previously derived algorithms are demonstrated to be limiting cases of our formulation. We apply our methods to biologically relevant test systems to demonstrate their accuracy and efficiency. PMID:19673546
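
    For reference, the baseline SSA against which such approximation hierarchies are measured fits in a few lines (a generic Gillespie direct-method sketch; the birth-death reactions and rates are illustrative):

      import numpy as np

      rng = np.random.default_rng(1)

      def ssa(x0, stoich, rates, propensity, t_end):
          # Gillespie direct method: draw the waiting time from an
          # exponential with the total propensity, then pick which
          # reaction fires in proportion to its propensity.
          t, x = 0.0, np.array(x0, dtype=float)
          out = [(0.0, x.copy())]
          while t < t_end:
              a = propensity(x, rates)
              a0 = a.sum()
              if a0 == 0.0:
                  break
              t += rng.exponential(1.0 / a0)
              j = rng.choice(a.size, p=a / a0)
              x += stoich[j]
              out.append((t, x.copy()))
          return out

      # Birth-death example: 0 -> X at rate k1;  X -> 0 at rate k2 * x
      traj = ssa([0], np.array([[1.0], [-1.0]]), (5.0, 0.1),
                 lambda x, k: np.array([k[0], k[1] * x[0]]), 50.0)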

  9. Pre-Hospital Triage of Trauma Patients Using the Random Forest Computer Algorithm

    PubMed Central

    Scerbo, Michelle; Radhakrishnan, Hari; Cotton, Bryan; Dua, Anahita; Del Junco, Deborah; Wade, Charles; Holcomb, John B.

    2015-01-01

    Background: Over-triage not only wastes resources but also displaces patients from their communities and delays treatment for the more seriously injured. This study aimed to validate the Random Forest computer model (RFM) as a means of better triaging trauma patients to Level I trauma centers. Methods: Adult trauma patients with “medium activation” presenting via helicopter to a Level I trauma center from May 2007 to May 2009 were included. The “medium activation” trauma patient is alert and hemodynamically stable on scene but has either subnormal vital signs or an accumulation of risk factors that may indicate a potentially serious injury. Variables included in the RFM analysis were demographics, mechanism of injury, pre-hospital fluids, medications, vital signs, and disposition. Statistical analysis was performed via the random forest algorithm to compare our institutional triage rate with the rates determined by the RFM. Results: A total of 1,653 patients were included in this study, of which 496 were used in the testing set of the RFM. In our testing set, 33.8% of patients brought to our Level I trauma center could have been managed at a Level III trauma center, and 88% of patients that required a Level I trauma center were identified correctly. The testing set had an over-triage rate of 66%; utilizing the RFM decreased the over-triage rate to 42% (p < 0.001). There was an under-triage rate of 8.3%. The RFM predicted patient disposition with a sensitivity of 89%, specificity of 42%, negative predictive value of 92%, and positive predictive value of 34%. Conclusion: While prospective validation is required, it appears that computer modeling could potentially be used to guide triage decisions, allowing both more accurate triage and more efficient use of the trauma system. PMID:24484906
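
    The modeling step translates naturally to standard libraries. A sketch with scikit-learn on synthetic stand-in data (the feature matrix and label below are fabricated placeholders, not the study's patient variables):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      # Stand-in data: rows = patients, columns = pre-hospital variables;
      # label = 1 if the patient truly required a Level I centre.
      rng = np.random.default_rng(2)
      X = rng.normal(size=(1653, 12))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1653)) > 1

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                random_state=0)
      clf = RandomForestClassifier(n_estimators=500, random_state=0)
      clf.fit(X_tr, y_tr)
      print(clf.score(X_te, y_te))          # overall accuracy
      print(clf.predict_proba(X_te)[:5])    # triage probabilities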

  10. Automatic classification of endogenous seismic sources within a landslide body using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Provost, Floriane; Hibert, Clément; Malet, Jean-Philippe; Stumpf, André; Doubre, Cécile

    2016-04-01

    Several studies have shown the presence of microseismic activity in soft-rock landslides. The seismic signals exhibit significantly different features in the time and frequency domains, which allows their classification and interpretation. Most of the classes can be associated with different mechanisms of deformation occurring within the landslide and at its surface (e.g. rockfall, slide-quake, fissure opening, fluid circulation). However, some signals remain only partly understood, and some classes contain too few examples to permit interpretation. To move toward a more complete interpretation of the links between the dynamics of soft-rock landslides and the physical processes controlling their behaviour, a complete catalog of the endogenous seismicity is needed. We propose a multi-class detection method based on the random forests algorithm to automatically classify the sources of seismic signals. Random forests is a supervised machine learning technique based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets that include examples of each target class, described by a set of signal attributes. For seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The random forest classifier is used because it provides state-of-the-art performance compared with other machine learning techniques (e.g. SVM, neural networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied to the seismicity recorded at the Super-Sauze landslide between 2013 and 2015. We selected a dozen seismic-signal features that characterize precisely its spectral content (e.g. central frequency, spectrum width, energy in several frequency bands, spectrogram shape, spectrum local and global maxima

  11. Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2014-09-22

    We present a novel image hiding method based on a phase retrieval algorithm under the framework of nonlinear double random phase encoding in the fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs, or digital inverse computation appears in our proposed method. To reduce key transmission, a modified image hiding method based on the modified phase retrieval algorithm and a logistic map is further proposed, in which the fractional orders and the parameters of the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.

  12. INSTRUCTIONAL CONFERENCE ON THE THEORY OF STOCHASTIC PROCESSES: On the general theory of random fields on the plane

    NASA Astrophysics Data System (ADS)

    Gushchin, A. A.

    1982-12-01

    Contents: Introduction; § 1. Basic notation and definitions; § 2. The Doléans measure and increasing fields; § 3. Theorems on predictable projections. Decomposition of weak submartingales; § 4. Weakly predictable random fields; § 5. Theorems on weakly predictable projections; § 6. Decomposition of strong martingales; References.

  13. An automatic water body area monitoring algorithm for satellite images based on Markov Random Fields

    NASA Astrophysics Data System (ADS)

    Elmi, Omid; Tourian, Mohammad J.; Sneeuw, Nico

    2016-04-01

    Our knowledge of the spatial and temporal variation of hydrological parameters is surprisingly poor, because most of it is based on in situ stations, whose number has declined dramatically during the past decades. On the other hand, remote sensing techniques have proven their ability to measure different parameters of Earth phenomena. Optical and SAR satellite imagery provide the opportunity to monitor spatial changes in the coastline, which can serve as a way to determine the water extent repeatedly at an appropriate time interval. An appropriate classification technique for separating water and land is the backbone of any automatic water body monitoring scheme. Due to changes in the water level, river and lake extent, atmosphere, sunlight radiation, and onboard calibration of the satellite over time, most pixel-based classification techniques fail to determine accurate water masks. Beyond pixel intensity, the spatial correlation between neighboring pixels is another source of information that should be used to decide the labels of pixels. Water bodies have strong spatial correlation in satellite images; therefore, including contextual information as an additional constraint in the water body monitoring procedure improves the accuracy of the derived water masks significantly. In this study, we present an automatic algorithm for water body area monitoring based on maximum a posteriori (MAP) estimation of Markov random fields (MRFs). First, we collect all available images from the selected case studies during the monitoring period. Then, for each image separately, we apply k-means clustering to derive a primary water mask. After that, we develop an MRF using the pixel values and the primary water mask for each image. Then, among the different realizations of the field, we select the one that maximizes the posterior estimate. We solve this optimization problem using graph cut techniques. A graph with two terminals is constructed, after which the best labelling structure for
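
    The per-image primary-mask stage described above is straightforward; a sketch with scikit-learn, assuming a single-band image in which water pixels form the darker cluster (the MRF/graph-cut refinement is omitted):

      import numpy as np
      from sklearn.cluster import KMeans

      def primary_water_mask(image):
          # Two-cluster k-means on pixel intensities; take the darker
          # cluster as water (true for SAR and many optical bands).
          pix = image.reshape(-1, 1).astype(float)
          km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pix)
          water_label = np.argmin(km.cluster_centers_.ravel())
          return (km.labels_ == water_label).reshape(image.shape)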

  14. High-copy bacterial plasmids diffuse in the nucleoid-free space, replicate stochastically and are randomly partitioned at cell division.

    PubMed

    Reyes-Lamothe, Rodrigo; Tran, Tung; Meas, Diane; Lee, Laura; Li, Alice M; Sherratt, David J; Tolmasky, Marcelo E

    2014-01-01

    Bacterial plasmids play important roles in metabolism, pathogenesis, and bacterial evolution, and are highly versatile biotechnological tools. Stable inheritance of plasmids depends on their autonomous replication and efficient partition to daughter cells at cell division. Active partition systems have not been identified for high-copy-number plasmids, and it has generally been believed that they are partitioned randomly at cell division. Nevertheless, direct evidence for the cellular location of replicating and nonreplicating plasmids, and for the partition mechanism, has been lacking. As a model we used pJHCMW1, a plasmid isolated from Klebsiella pneumoniae that carries two β-lactamase and two aminoglycoside resistance genes. Here we report that individual ColE1-type plasmid molecules are mobile and tend to be excluded from the nucleoid, mainly localizing at the cell poles but occasionally moving between poles along the long axis of the cell. As a consequence, at the moment of cell division, most plasmid molecules are located at the poles, resulting in efficient random partition to the daughter cells. Complete replication of individual molecules occurred stochastically and independently in the nucleoid-free space throughout the cell cycle, with a constant probability of initiation per plasmid.

  15. Parallel high-order methods for deterministic and stochastic CFD and MHD problems

    NASA Astrophysics Data System (ADS)

    Lin, Guang

    In computational fluid dynamics (CFD) and magneto-hydro-dynamics (MHD) applications there exist many sources of uncertainty, arising from imprecise material properties, random geometric roughness, noise in boundary/initial conditions, transport coefficients, or external forcing. In this dissertation, stochastic perturbation analysis and stochastic simulations based on multi-element generalized polynomial chaos (ME-gPC) are employed synergistically to solve large-scale stochastic CFD and MHD problems with many random inputs. Stochastic analytical solutions are obtained to serve in verifying the accuracy of the numerical results for small random inputs, but also in shedding light on the physical mechanisms and scaling laws associated with the structural changes of the flow field due to random inputs. First, the Karhunen-Loève (K-L) decomposition is presented; it is an efficient technique for modeling the random inputs. How to represent the covariance kernel for different boundary constraints is an important issue. A new covariance matrix for a one-dimensional fourth-order random process with four boundary constraints is derived analytically, and it is used to model random rough wedge surfaces subjected to supersonic flow. The ME-gPC algorithm is presented next. ME-gPC is based on the decomposition of the random space and spectral expansions. To efficiently solve complex stochastic fluid dynamical systems, e.g., stochastic compressible flows, the ME-gPC method is extended to a multi-element probabilistic collocation method on sparse grids (ME-PCM) by coupling it with the probabilistic collocation projection. By using sparse grid points, ME-PCM can handle random processes with a large number of random dimensions at relatively lower computational cost compared with full tensor products. Several prototype problems in compressible and MHD flows are investigated by employing the aforementioned high-order stochastic numerical methods in conjunction with the stochastic
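
    A discrete K-L sample for a generic exponential covariance kernel can be drawn by eigendecomposition of the covariance matrix (a stand-in: the dissertation's fourth-order process with four boundary constraints requires its own analytically derived kernel):

      import numpy as np

      def kl_sample(x, corr_len, n_modes, rng):
          # Discrete Karhunen-Loeve sample of a zero-mean Gaussian field
          # with covariance C(s, t) = exp(-|s - t| / corr_len) on grid x.
          C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
          lam, phi = np.linalg.eigh(C)
          lam, phi = lam[::-1], phi[:, ::-1]      # sort modes by energy
          xi = rng.standard_normal(n_modes)       # independent N(0, 1)
          return phi[:, :n_modes] @ (np.sqrt(np.maximum(lam[:n_modes], 0.0)) * xi)

      field = kl_sample(np.linspace(0.0, 1.0, 200), 0.2, 20,
                        np.random.default_rng(3))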

  16. Stochastic models of solute transport in highly heterogeneous geologic media

    SciTech Connect

    Semenov, V.N.; Korotkin, I.A.; Pruess, K.; Goloviznin, V.M.; Sorokovikova, O.S.

    2009-09-15

    A stochastic model of anomalous diffusion was developed in which transport occurs by the random motion of Brownian particles, described by distribution functions of random displacements with heavy (power-law) tails. An effective algorithm for generating random functions with power-law asymptotics and an arbitrary asymmetry factor is proposed; it is based on the Gnedenko-Lévy limit theorem and makes it possible to reproduce all known Lévy α-stable fractal processes. A two-dimensional stochastic random walk algorithm has been developed that approximates anomalous diffusion with streamline-dependent and space-dependent parameters. The motivation for introducing this type of dispersion model is the observation that tracers in natural aquifers spread at different super-Fickian rates in different directions. For this and other important cases, stochastic random walk models are the only known way to solve the so-called multiscaling fractional-order diffusion equation with space-dependent parameters. Some comparisons of model results and field experiments are presented.
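
    The heavy-tailed displacement idea is easy to demonstrate with SciPy's α-stable sampler (a generic constant-parameter walk; the streamline- and space-dependent parameters of the model above are not reproduced):

      import numpy as np
      from scipy.stats import levy_stable

      # 2-D random walk whose jumps are alpha-stable (heavy-tailed);
      # alpha < 2 gives super-diffusive spreading with rare long jumps.
      alpha, beta = 1.5, 0.0
      steps = levy_stable.rvs(alpha, beta, size=(10_000, 2), random_state=42)
      path = np.cumsum(steps, axis=0)
      print(np.abs(steps).max())   # occasional very long Levy jumps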

  17. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift.

    PubMed

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) have been proposed; however, neither the computation time nor the reconstruction quality of traditional CS-MRI meets the requirements of clinical use. Method. In this study, a novel method named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (EWISTARS) is proposed. It is composed of three components: (i) an exponential wavelet transform, (ii) an iterative shrinkage-thresholding algorithm, and (iii) a random shift. Results. Experimental results validated that, compared with state-of-the-art approaches, EWISTARS obtained the lowest mean absolute error, the lowest mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
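
    The iterative shrinkage-thresholding core of such methods, stripped of the exponential wavelet transform and random shift, is the classical ISTA loop for an l1-regularized least-squares problem (all problem sizes below are illustrative):

      import numpy as np

      def soft(x, t):
          # Soft-thresholding, the proximal operator of the l1 norm.
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def ista(A, y, lam, n_iter=200):
          # Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by a gradient
          # step followed by shrinkage, with step 1/L (L = Lipschitz
          # constant of the data-fit gradient).
          L = np.linalg.norm(A, 2) ** 2
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = soft(x - A.T @ (A @ x - y) / L, lam / L)
          return x

      rng = np.random.default_rng(5)
      A = rng.normal(size=(60, 200))
      x_true = np.zeros(200)
      x_true[rng.choice(200, 8, replace=False)] = 1.0
      x_hat = ista(A, A @ x_true, lam=0.1)   # recovers a sparse x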

  18. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    PubMed Central

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) have been proposed; however, neither the computation time nor the reconstruction quality of traditional CS-MRI meets the requirements of clinical use. Method. In this study, a novel method named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (EWISTARS) is proposed. It is composed of three components: (i) an exponential wavelet transform, (ii) an iterative shrinkage-thresholding algorithm, and (iii) a random shift. Results. Experimental results validated that, compared with state-of-the-art approaches, EWISTARS obtained the lowest mean absolute error, the lowest mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  19. Predictability and reduced order modeling in stochastic reaction networks.

    SciTech Connect

    Najm, Habib N.; Debusschere, Bert J.; Sargsyan, Khachik

    2008-10-01

    Many systems involving chemical reactions between small numbers of molecules exhibit inherent stochastic variability. Such stochastic reaction networks are at the heart of processes such as gene transcription, cell signaling or surface catalytic reactions, which are critical to bioenergy, biomedical, and electrical storage applications. The underlying molecular reactions are commonly modeled with chemical master equations (CMEs), representing jump Markov processes, or stochastic differential equations (SDEs), rather than ordinary differential equations (ODEs). As such reaction networks are often inferred from noisy experimental data, it is not uncommon to encounter large parametric uncertainties in these systems. Further, a wide range of time scales introduces the need for reduced order representations. Despite the availability of mature tools for uncertainty/sensitivity analysis and reduced order modeling in deterministic systems, there is a lack of robust algorithms for such analyses in stochastic systems. In this talk, we present advances in algorithms for predictability and reduced order representations for stochastic reaction networks and apply them to bistable systems of biochemical interest. To study the predictability of a stochastic reaction network in the presence of both parametric uncertainty and intrinsic variability, an algorithm was developed to represent the system state with a spectral polynomial chaos (PC) expansion in the stochastic space representing parametric uncertainty and intrinsic variability. Rather than relying on a non-intrusive collocation-based Galerkin projection [1], this PC expansion is obtained using Bayesian inference, which is ideally suited to handle noisy systems through its probabilistic formulation. To accommodate state variables with multimodal distributions, an adaptive multiresolution representation is used [2]. As the PC expansion directly relates the state variables to the uncertain parameters, the formulation lends

  20. Model parameter adaption-based multi-model algorithm for extended object tracking using a random matrix.

    PubMed

    Li, Borui; Mu, Chundi; Han, Shuli; Bai, Tianming

    2014-01-01

    Traditional object tracking technology usually regards the target as a point-source object. However, this approximation is no longer appropriate for tracking extended objects such as large targets and closely spaced group objects. Bayesian extended object tracking (EOT) using a random symmetric positive definite (SPD) matrix is a very effective method to jointly estimate the kinematic state and the physical extension of the target. The key issue in the application of this random matrix-based EOT approach is to model the physical extension and the measurement noise accurately. Model parameter adaptive approaches for both the extension dynamics and the measurement noise are proposed in this study, based on the properties of the SPD matrix, to improve the performance of extension estimation. An interacting multi-model algorithm based on the model parameter adaptive filter using a random matrix is also presented. Simulation results demonstrate the effectiveness of the proposed adaptive approaches and the multi-model algorithm. The estimation performance for the physical extension is better than that of the other algorithms, especially when the target maneuvers. The kinematic state estimation error is lower than the others as well. PMID:24763252

  1. Model parameter adaption-based multi-model algorithm for extended object tracking using a random matrix.

    PubMed

    Li, Borui; Mu, Chundi; Han, Shuli; Bai, Tianming

    2014-04-24

    Traditional object tracking technology usually regards the target as a point-source object. However, this approximation is no longer appropriate for tracking extended objects such as large targets and closely spaced group objects. Bayesian extended object tracking (EOT) using a random symmetric positive definite (SPD) matrix is a very effective method to jointly estimate the kinematic state and the physical extension of the target. The key issue in the application of this random matrix-based EOT approach is to model the physical extension and the measurement noise accurately. Model parameter adaptive approaches for both the extension dynamics and the measurement noise are proposed in this study, based on the properties of the SPD matrix, to improve the performance of extension estimation. An interacting multi-model algorithm based on the model parameter adaptive filter using a random matrix is also presented. Simulation results demonstrate the effectiveness of the proposed adaptive approaches and the multi-model algorithm. The estimation performance for the physical extension is better than that of the other algorithms, especially when the target maneuvers. The kinematic state estimation error is lower than the others as well.

  2. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction, but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent, or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve the global statistical properties of the solution across the different subdomains. Often, the codes that model different parts of the domains are black-box, and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: the first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations; the second algorithm is based on PDE-constrained multi-objective optimization and allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.
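
    A deterministic skeleton of the first algorithm's Schwarz component may help: below is a plain alternating Schwarz iteration on -u'' = f with two overlapping subdomains (the paper matches conditional means and variances of stochastic solutions at the interfaces; none of that statistical machinery is reproduced here, and all grid values are illustrative):

      import numpy as np

      def dirichlet_solve(n, h, f, ua, ub):
          # Central-difference solve of -u'' = f on n interior points
          # with Dirichlet boundary values ua and ub.
          A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / h**2
          b = f.copy()
          b[0] += ua / h**2
          b[-1] += ub / h**2
          return np.linalg.solve(A, b)

      N = 40
      h = 1.0 / N
      x = np.linspace(0.0, 1.0, N + 1)
      f = np.ones(N + 1)
      u = np.zeros(N + 1)           # global iterate with u(0) = u(1) = 0
      i1, i2 = 15, 25               # overlap region [x[i1], x[i2]]
      for _ in range(20):           # alternating Schwarz sweeps
          u[1:i2] = dirichlet_solve(i2 - 1, h, f[1:i2], 0.0, u[i2])
          u[i1 + 1:N] = dirichlet_solve(N - i1 - 1, h, f[i1 + 1:N], u[i1], 0.0)
      print(np.abs(u - x * (1.0 - x) / 2.0).max())   # tiny: exact u = x(1-x)/2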

  3. Application of the stochastic resonance algorithm to the simultaneous quantitative determination of multiple weak peaks of ultra-performance liquid chromatography coupled to time-of-flight mass spectrometry.

    PubMed

    Deng, Haishan; Shang, Erxin; Xiang, Bingren; Xie, Shaofei; Tang, Yuping; Duan, Jin-ao; Zhan, Ying; Chi, Yumei; Tan, Defei

    2011-03-15

    The stochastic resonance algorithm (SRA) has been developed in recent years as a potential tool for amplifying and determining weak chromatographic peaks. However, the conventional SRA cannot be applied directly to ultra-performance liquid chromatography/time-of-flight mass spectrometry (UPLC/TOFMS). The obstacle lies in the fact that the narrow peaks generated by UPLC contain high-frequency components that fall beyond the restrictions of the theory of stochastic resonance. Although an algorithm already exists that allows a high-frequency weak signal to be detected, the sampling frequency of TOFMS is not fast enough to meet its requirements. Another problem is the suppression of the weak peaks of compounds with low concentrations or weak detection responses, which prevents the simultaneous determination of multi-component UPLC/TOFMS peaks. To lower the frequencies of the peaks, an interpolation and re-scaling frequency stochastic resonance (IRSR) is proposed, which re-scales the peak frequencies by numerically interpolating sample points linearly. The re-scaled UPLC/TOFMS peaks can then be amplified significantly. By introducing an external energy field upon the UPLC/TOFMS signals, the method of energy gain was developed to simultaneously amplify and determine weak peaks from multiple components. Subsequently, a multi-component stochastic resonance algorithm was constructed for the simultaneous quantitative determination of multiple weak UPLC/TOFMS peaks based on the two methods. The optimization of parameters is discussed in detail with simulated data sets, and the applicability of the algorithm is evaluated by quantitative analysis of three alkaloids in human plasma using UPLC/TOFMS. The new algorithm improved the signal-to-noise ratio (S/N) markedly compared with several commonly used peak enhancement methods, including the Savitzky-Golay filter, the Whittaker-Eilers smoother, and matched filtration.
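
    The underlying bistable stochastic-resonance mechanism can be sketched with an Euler-Maruyama integration (textbook double-well form with illustrative parameters; the IRSR re-scaling and energy-gain steps of the paper are not reproduced):

      import numpy as np

      # Euler-Maruyama integration of the bistable stochastic-resonance
      # system dx = (a*x - b*x**3 + s(t)) dt + sqrt(2*D) dW.
      rng = np.random.default_rng(0)
      a, b, D, dt, n = 1.0, 1.0, 0.3, 1e-3, 100_000
      t = np.arange(n) * dt
      s = 0.1 * np.sin(2 * np.pi * 0.5 * t)       # weak periodic input
      x = np.empty(n)
      x[0] = -1.0
      for k in range(n - 1):
          drift = a * x[k] - b * x[k] ** 3 + s[k]
          x[k + 1] = x[k] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
      # near the optimal noise level, inter-well hopping locks onto s(t)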

  4. Probabilistic structural analysis algorithm development for computational efficiency

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1991-01-01

    The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient but approximate in nature. Over the last six years, the algorithms have improved significantly.

  5. Angular spectral plane-wave expansion of nonstationary random fields in stochastic mode-stirred reverberation processes.

    PubMed

    Arnaut, Luk R

    2010-04-01

    We derive an integral expression for the plane-wave expansion of the time-varying (nonstationary) random field inside a mode-stirred reverberation chamber. It is shown that this expansion is a so-called oscillatory process, whose kernel can be expressed explicitly in closed form. The effect of nonstationarity is a modulation of the spectral density of the field on a time scale that is a function of the cavity relaxation time. It is also shown how the contribution by a nonzero initial value of the field can be incorporated into the expansion. The results are extended to a special class of second-order processes, relevant to the reception of a mode-stirred reverberation field by a device under test with a first-order (relaxation-type) frequency response.

  6. Angular spectral plane-wave expansion of nonstationary random fields in stochastic mode-stirred reverberation processes

    NASA Astrophysics Data System (ADS)

    Arnaut, Luk R.

    2010-04-01

    We derive an integral expression for the plane-wave expansion of the time-varying (nonstationary) random field inside a mode-stirred reverberation chamber. It is shown that this expansion is a so-called oscillatory process, whose kernel can be expressed explicitly in closed form. The effect of nonstationarity is a modulation of the spectral density of the field on a time scale that is a function of the cavity relaxation time. It is also shown how the contribution by a nonzero initial value of the field can be incorporated into the expansion. The results are extended to a special class of second-order processes, relevant to the reception of a mode-stirred reverberation field by a device under test with a first-order (relaxation-type) frequency response.

  7. On implementation of EM-type algorithms in the stochastic models for a matrix computing on GPU

    SciTech Connect

    Gorshenin, Andrey K.

    2015-03-10

    The paper discusses the main ideas behind an implementation of EM-type algorithms on graphics processors and their application to probabilistic models based on Cox processes. An example of GPU-adapted MATLAB source code for finite normal mixtures with matrix-form expectation-maximization formulas is given. The computational efficiency of the GPU versus the CPU is illustrated for different sample sizes.
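
    For orientation, the E and M steps that the paper casts in GPU matrix form look as follows in plain CPU NumPy for a 1-D normal mixture (a generic sketch, not the paper's MATLAB code):

      import numpy as np

      def em_gmm_1d(x, K, n_iter=100, seed=0):
          # Plain E and M steps for a 1-D K-component normal mixture.
          rng = np.random.default_rng(seed)
          w = np.full(K, 1.0 / K)
          mu = rng.choice(x, K, replace=False)
          var = np.full(K, x.var())
          for _ in range(n_iter):
              # E-step: responsibilities r[i, k]
              d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
              r = w * d
              r /= r.sum(axis=1, keepdims=True)
              # M-step: weighted parameter updates
              nk = r.sum(axis=0)
              w = nk / x.size
              mu = (r * x[:, None]).sum(axis=0) / nk
              var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
          return w, mu, var

      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(0, 1, 500), rng.normal(4, 1, 500)])
      print(em_gmm_1d(x, 2))   # recovers weights ~0.5 and means ~0 and ~4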

  8. An algorithm to detect chimeric clones and random noise in genomic mapping

    SciTech Connect

    Grigoriev, A.; Mott, R.; Lehrach, H.

    1994-07-15

    Experimental noise and contiguous clone inserts can pose serious problems in reconstructing genomic maps from hybridization data. The authors describe an algorithm that easily identifies false positive signals and clones containing chimeric inserts or internal deletions. The algorithm “dechimerizes” clones, splitting them into independent contiguous components and cleaning the initial library into a more consistent data set for further ordering. The effectiveness of the algorithm is demonstrated on both simulated data and the real YAC map of the whole genome of the fission yeast Schizosaccharomyces pombe. 8 refs., 3 figs., 1 tab.

  9. A randomized controlled trial of a diagnostic algorithm for symptoms of uncomplicated cystitis at an out-of-hours service

    PubMed Central

    Grude, Nils; Lindbaek, Morten

    2015-01-01

    Objective. To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor with that of patients who were given treatment following a diagnostic algorithm. Design. Randomized controlled trial. Setting. Out-of-hours service, Oslo, Norway. Intervention. Women with typical symptoms of uncomplicated cystitis were included in the trial between September 2010 and November 2011. They were randomized into two groups: one group received standard treatment according to the diagnostic algorithm; the other group received treatment after a regular consultation with a doctor. Subjects. Women (n = 441) aged 16–55 years; mean age in both groups 27 years. Main outcome measures. Number of days until symptomatic resolution. Results. No significant differences were found between the groups in basic patient demographics, severity of symptoms, or percentage of urine samples with single-culture growth. A median of three days until symptomatic resolution was found in both groups. By day four, 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was non-significantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Conclusion. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of the treatment strategy can lead to more rational use of consultation time and stricter adherence to the National Antibiotic Guidelines for a common disorder. PMID:25961367

  10. Random sampler M-estimator algorithm with sequential probability ratio test for robust function approximation via feed-forward neural networks.

    PubMed

    El-Melegy, Moumen T

    2013-07-01

    This paper addresses the problem of fitting a functional model to data corrupted with outliers using a multilayered feed-forward neural network. Although it is of high importance in practical applications, this problem has not received careful attention from the neural network research community. One recent approach to solving this problem is to use a neural network training algorithm based on the random sample consensus (RANSAC) framework. This paper proposes a new algorithm that offers two enhancements over the original RANSAC algorithm. The first one improves the algorithm accuracy and robustness by employing an M-estimator cost function to decide on the best estimated model from the randomly selected samples. The other one improves the time performance of the algorithm by utilizing a statistical pretest based on Wald's sequential probability ratio test. The proposed algorithm is successfully evaluated on synthetic and real data, contaminated with varying degrees of outliers, and compared with existing neural network training algorithms.
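
    The first enhancement, scoring candidate models with an M-estimator cost instead of an inlier count, can be sketched on a toy line-fitting problem (a truncated-quadratic cost stands in for the paper's M-estimator, and the Wald-test speed-up is omitted):

      import numpy as np

      def ransac_m_estimator(x, y, n_trials=500, c=1.0, seed=0):
          # RANSAC-style sampling of minimal (two-point) line models,
          # scored by a robust truncated-quadratic cost over all data.
          rng = np.random.default_rng(seed)
          best, best_cost = None, np.inf
          for _ in range(n_trials):
              i, j = rng.choice(x.size, 2, replace=False)
              if x[i] == x[j]:
                  continue
              m = (y[j] - y[i]) / (x[j] - x[i])
              b = y[i] - m * x[i]
              r = y - (m * x + b)
              cost = np.minimum(r ** 2, c ** 2).sum()   # cap outlier influence
              if cost < best_cost:
                  best, best_cost = (m, b), cost
          return best

      rng = np.random.default_rng(7)
      x = np.linspace(0, 10, 200)
      y = 2 * x + 1 + rng.normal(0, 0.2, 200)
      y[::10] += rng.normal(0, 20, 20)                  # gross outliers
      print(ransac_m_estimator(x, y))                   # close to (2.0, 1.0)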

  11. Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2011-09-01

    The herein presented mutation-based artificial fish swarm (AFS) algorithm includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from the random, leaping, and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.

  12. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    DOE PAGESBeta

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.; Castaing, Jeremy

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios is provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm that utilizes a separation routine for identifying violated cut-sets and can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to obtain a high-quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.

  13. Evaluation of Laser Based Alignment Algorithms Under Additive Random and Diffraction Noise

    SciTech Connect

    McClay, W A; Awwal, A; Wilhelmsen, K; Ferguson, W; McGee, M; Miller, M

    2004-09-30

    The purpose of the automatic alignment algorithm at the National Ignition Facility (NIF) is to determine the position of a laser beam based on the position of beam features from video images. The position information obtained is used to command motors and attenuators to adjust the beam lines to the desired position, which facilitates the alignment of all 192 beams. One of the goals of the algorithm development effort is to ascertain the performance, reliability, and uncertainty of the position measurement. This paper describes a method of evaluating the performance of algorithms using Monte Carlo simulation. In particular we show the application of this technique to the LM1_LM3 algorithm, which determines the position of a series of two beam light sources. The performance of the algorithm was evaluated for an ensemble of over 900 simulated images with varying image intensities and noise counts, as well as varying diffraction noise amplitude and frequency. The performance of the algorithm on the image data set had a tolerance well beneath the 0.5-pixel system requirement.

  14. Heuristic-biased stochastic sampling

    SciTech Connect

    Bresina, J.L.

    1996-12-31

    This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that with the proper bias function, it can be easy to outperform greedy search.
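
    A single HBSS decision can be sketched in a few lines: rank the candidates by the heuristic and pick rank r with probability proportional to a bias function of r (function names and the 1/r^2 bias below are illustrative):

      import numpy as np

      def hbss_choice(candidates, heuristic, bias, rng):
          # Rank candidates by the heuristic (best first), then select
          # rank r with probability proportional to bias(r). bias(r) = 1
          # gives uniform random search; a steep bias approaches greedy.
          order = sorted(candidates, key=heuristic)
          w = np.array([bias(r) for r in range(1, len(order) + 1)], float)
          return order[rng.choice(len(order), p=w / w.sum())]

      rng = np.random.default_rng(8)
      pick = hbss_choice([4, 7, 1, 9], heuristic=lambda c: c,
                         bias=lambda r: 1.0 / r ** 2, rng=rng)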

  15. Using a stochastic gradient boosting algorithm to analyse the effectiveness of Landsat 8 data for montado land cover mapping: Application in southern Portugal

    NASA Astrophysics Data System (ADS)

    Godinho, Sérgio; Guiomar, Nuno; Gil, Artur

    2016-07-01

    This study aims to develop and propose a methodological approach for montado ecosystem mapping using Landsat 8 multi-spectral data, vegetation indices, and the Stochastic Gradient Boosting (SGB) algorithm. Two Landsat 8 scenes (images from spring and summer 2014) of the same area in southern Portugal were acquired. Six vegetation indices were calculated for each scene: the Enhanced Vegetation Index (EVI), the Short-Wave Infrared Ratio (SWIR32), the Carotenoid Reflectance Index 1 (CRI1), the Green Chlorophyll Index (CIgreen), the Normalised Multi-band Drought Index (NMDI), and the Soil-Adjusted Total Vegetation Index (SATVI). Based on this information, two datasets were prepared: (i) Dataset I only included multi-temporal Landsat 8 spectral bands (LS8), and (ii) Dataset II included the same information as Dataset I plus vegetation indices (LS8 + VIs). The integration of the vegetation indices into the classification scheme resulted in a significant improvement in the accuracy of Dataset II's classifications when compared to Dataset I (McNemar test: Z-value = 4.50), leading to a difference of 4.90% in overall accuracy and 0.06 in the Kappa value. For the montado ecosystem, adding vegetation indices in the classification process showed a relevant increment in producer and user accuracies of 3.64% and 6.26%, respectively. By using the variable importance function from the SGB algorithm, it was found that the six most prominent variables (from a total of 24 tested variables) were the following: EVI_summer; CRI1_spring; SWIR32_spring; B6_summer; B5_summer; and CIgreen_summer.

  16. On the efficiency of a randomized mirror descent algorithm in online optimization problems

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.

    2015-04-01

    A randomized online version of the mirror descent method is proposed. It differs from the existing versions in the randomization method: randomization is performed at the stage of projecting a subgradient of the function being optimized onto the unit simplex, rather than at the stage of computing a subgradient, as is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to interpret uniformly results on weighting expert decisions and to propose a highly efficient method for finding an equilibrium in a zero-sum two-person matrix game with a sparse matrix.
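
    For orientation, standard (non-randomized) entropic mirror descent on the unit simplex is the multiplicative-weights update below; the paper's contribution, randomizing the projection step so that one random component is updated per iteration, is not reproduced:

      import numpy as np

      def mirror_descent_simplex(grad, x0, eta, n_steps):
          # Exponentiated-gradient update; renormalising is the Bregman
          # (entropic) projection back onto the unit simplex.
          x = x0.copy()
          for _ in range(n_steps):
              x *= np.exp(-eta * grad(x))
              x /= x.sum()
          return x

      # Minimize f(x) = <c, x> over the simplex; mass concentrates on argmin c
      c = np.array([0.3, 0.1, 0.7])
      x = mirror_descent_simplex(lambda x: c, np.full(3, 1 / 3), 0.5, 200)
      print(x)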

  17. Multi-Objective Random Search Algorithm for Simultaneously Optimizing Wind Farm Layout and Number of Turbines

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-09-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, which is calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length. This length is assumed to be the total length of the minimal spanning tree that connects all turbines and is calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also considered. An ideal test case shows the proposed algorithm largely outperforms a well-known multi-objective genetic algorithm (NSGA-II). In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
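
    The cable-length objective is a minimum spanning tree over the turbine positions; a compact Prim's-algorithm sketch (positions are random placeholders, and a real layout would also include the substation node):

      import numpy as np

      def prim_cable_length(pts):
          # Total length of the minimum spanning tree over the points,
          # grown one vertex at a time from the cheapest crossing edge.
          n = len(pts)
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
          in_tree = np.zeros(n, dtype=bool)
          in_tree[0] = True
          best = d[0].copy()            # cheapest edge from each node to tree
          total = 0.0
          for _ in range(n - 1):
              j = np.argmin(np.where(in_tree, np.inf, best))
              total += best[j]
              in_tree[j] = True
              best = np.minimum(best, d[j])
          return total

      rng = np.random.default_rng(9)
      print(prim_cable_length(rng.uniform(0, 5000, size=(20, 2))))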

  18. Segmentation of heterogeneous or small FDG PET positive tissue based on a 3D-locally adaptive random walk algorithm.

    PubMed

    Onoma, D P; Ruan, S; Thureau, S; Nkhali, L; Modzelewski, R; Monnehan, G A; Vera, P; Gardin, I

    2014-12-01

    A segmentation algorithm based on the random walk (RW) method, called 3D-LARW, has been developed to delineate small tumors or tumors with a heterogeneous distribution of FDG in PET images. Building on the original RW algorithm [1], we propose an improved approach that uses new parameters depending on the Euclidean distance between two adjacent voxels instead of a fixed value, and that integrates the probability densities of the labels into the system of linear equations used in the RW. These improvements were evaluated and compared with the original RW method, thresholding at a fixed value (40% of the maximum in the lesion), an adaptive thresholding algorithm, and the FLAB method, on uniform spheres filled with FDG, on simulated heterogeneous spheres, and on clinical data (14 patients). On these three different data sets, 3D-LARW showed better segmentation results than the original RW algorithm and the three other methods. As expected, these improvements are more pronounced for the segmentation of small tumors or tumors with heterogeneous FDG uptake.

  19. GUESS-ing Polygenic Associations with Multiple Phenotypes Using a GPU-Based Evolutionary Stochastic Search Algorithm

    PubMed Central

    Hastie, David I.; Zeller, Tanja; Liquet, Benoit; Newcombe, Paul; Yengo, Loic; Wild, Philipp S.; Schillert, Arne; Ziegler, Andreas; Nielsen, Sune F.; Butterworth, Adam S.; Ho, Weang Kee; Castagné, Raphaële; Munzel, Thomas; Tregouet, David; Falchi, Mario; Cambien, François; Nordestgaard, Børge G.; Fumeron, Fredéric; Tybjærg-Hansen, Anne; Froguel, Philippe; Danesh, John; Petretto, Enrico; Blankenberg, Stefan; Tiret, Laurence; Richardson, Sylvia

    2013-01-01

    Genome-wide association studies (GWAS) have yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS, where the detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex linkage disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and to study the genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175) compared with the largest published meta-GWAS (n > 100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with the TG-APOB phenotypic group and of LIPC with the TG-HDL group, associations that were overlooked in the larger meta-GWAS, were not revealed by competing approaches, and were replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on graphics processing units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify associated variants. This

  20. Mass weighted urn design--A new randomization algorithm for unequal allocations.

    PubMed

    Zhao, Wenle

    2015-07-01

    Unequal allocations have been used in clinical trials motivated by ethical, efficiency, or feasibility concerns. The commonly used permuted block randomization faces a tradeoff between effective imbalance control with a small block size and an accurate allocation target with a large block size. Few other randomization designs for unequal allocation have been proposed in the literature, and applications in real trials have hardly ever been reported, partly due to their complexity in implementation compared to permuted block randomization. Proposed in this paper is the mass weighted urn design, in which the number of balls in the urn equals the number of treatments and remains unchanged during the study. The chance of a ball being randomly selected is proportional to the mass of the ball. After each treatment assignment, part of the mass of the selected ball is redistributed to all balls based on the target allocation ratio. This design allows any desired unequal allocation to be accurately targeted without approximation, and provides consistent imbalance control throughout the allocation sequence. The statistical properties of this new design are evaluated with the Euclidean distance between the observed treatment distribution and the desired treatment distribution as the treatment imbalance measure, and the Euclidean distance between the conditional allocation probability and the target allocation probability as the allocation predictability measure. Computer simulation results are presented comparing the mass weighted urn design with other randomization designs currently available for unequal allocations.
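
    The update rule is simple enough to sketch directly from the description above. A minimal version follows, assuming the total mass alpha is chosen large enough that masses rarely go negative (the clipping to non-negative selection weights is our own guard, not part of the published design), with the target ratio and trial size as placeholders.

        # Sketch of a mass-weighted urn randomization (Python).
        import random

        def mass_weighted_urn(target, n, alpha=4.0, rng=random.Random(1)):
            k = len(target)
            mass = [alpha * p for p in target]    # one ball per arm, total mass alpha
            assignments = []
            for _ in range(n):
                # selection probability proportional to (non-negative) mass
                weights = [max(m, 0.0) for m in mass]
                i = rng.choices(range(k), weights=weights)[0]
                assignments.append(i)
                mass[i] -= 1.0                    # one unit leaves the drawn ball...
                for j in range(k):
                    mass[j] += target[j]          # ...and returns in the target ratio
            return assignments

        # Example: 2:1 allocation over 12 subjects
        print(mass_weighted_urn([2 / 3, 1 / 3], 12))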

  1. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper proposes a portfolio selection problem that accounts for an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, involving both randomness and subjectivity represented by fuzzy numbers, it is not well-defined. Therefore, by introducing the Sharpe ratio, one of the most important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
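
    Once the fuzzy parameters are fixed (defuzzified), the transformed problem has the shape of an ordinary Sharpe-ratio maximization, which is easy to sketch; the return vector, covariance matrix, and risk-free rate below are illustrative placeholders, not data from the paper.

        # Sharpe-ratio maximization on crisp inputs (Python).
        import numpy as np
        from scipy.optimize import minimize

        mu = np.array([0.08, 0.12, 0.10])             # expected returns (placeholder)
        cov = np.array([[0.10, 0.02, 0.01],
                        [0.02, 0.12, 0.03],
                        [0.01, 0.03, 0.09]])          # return covariance (placeholder)
        rf = 0.02                                     # risk-free rate

        neg_sharpe = lambda w: -(w @ mu - rf) / np.sqrt(w @ cov @ w)
        res = minimize(neg_sharpe, np.ones(3) / 3,
                       bounds=[(0, 1)] * 3,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
        print(res.x, -res.fun)                        # weights and their Sharpe ratio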

  2. [Segmentation of Winter Wheat Canopy Image Based on Visual Spectral and Random Forest Algorithm].

    PubMed

    Liu, Ya-dong; Cui, Ri-xian

    2015-12-01

    Digital image analysis has been widely used in non-destructive monitoring of crop growth and nitrogen nutrition status due to its simplicity and efficiency. It is necessary to segment the winter wheat plant from the soil background to assess canopy cover, the intensity levels of the visible spectrum (R, G, and B), and other color indices derived from RGB. In the present study, based on the variation in the R, G, and B components of the sRGB color space and the L*, a*, and b* components of the CIEL*a*b* color space between wheat plant and soil background, segmentation of the wheat plant from the soil background was conducted by Otsu's method applied to the a* component of the CIEL*a*b* color space, an RGB-based random forest method, and a CIEL*a*b*-based random forest method. The ability to segment the wheat plant from the soil background was evaluated in terms of segmentation accuracy. The results showed that all three methods segmented the wheat plant from the soil background well. Otsu's method had the lowest segmentation accuracy of the three, and there was only a small difference in segmentation error between the two random forest methods. In conclusion, the random forest method demonstrated its capacity to segment the wheat plant from the soil background using only the visible spectral information of the canopy image, without any combination of color components or any color space transformation.
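
    As a concrete illustration of the simplest of the three methods, a*-channel Otsu thresholding can be written in a few lines with scikit-image; the two-tone synthetic image below is a stand-in for a real canopy photograph, not the study's data.

        # Otsu segmentation on the CIELAB a* component (Python).
        import numpy as np
        from skimage import color, filters

        # synthetic stand-in: a green "plant" patch on a brown "soil" background
        rgb = np.tile([0.45, 0.30, 0.15], (64, 64, 1))   # soil-like RGB
        rgb[16:48, 16:48] = [0.20, 0.55, 0.20]           # plant-like RGB
        a_star = color.rgb2lab(rgb)[..., 1]              # L*, a*, b* -> keep a*
        t = filters.threshold_otsu(a_star)
        plant_mask = a_star < t          # vegetation lies on the negative-a* side
        print(plant_mask.mean())         # fraction of pixels labelled as plant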

  3. [Segmentation of Winter Wheat Canopy Image Based on Visual Spectral and Random Forest Algorithm].

    PubMed

    Liu, Ya-dong; Cui, Ri-xian

    2015-12-01

    Digital image analysis has been widely used in non-destructive monitoring of crop growth and nitrogen nutrition status due to its simplicity and efficiency. It is necessary to segment the winter wheat plant from the soil background to assess canopy cover, the intensity levels of the visible spectrum (R, G, and B), and other color indices derived from RGB. In the present study, based on the variation in the R, G, and B components of the sRGB color space and the L*, a*, and b* components of the CIEL*a*b* color space between wheat plant and soil background, segmentation of the wheat plant from the soil background was conducted by Otsu's method applied to the a* component of the CIEL*a*b* color space, an RGB-based random forest method, and a CIEL*a*b*-based random forest method. The ability to segment the wheat plant from the soil background was evaluated in terms of segmentation accuracy. The results showed that all three methods segmented the wheat plant from the soil background well. Otsu's method had the lowest segmentation accuracy of the three, and there was only a small difference in segmentation error between the two random forest methods. In conclusion, the random forest method demonstrated its capacity to segment the wheat plant from the soil background using only the visible spectral information of the canopy image, without any combination of color components or any color space transformation. PMID:26964234

  4. Ant colony optimization and stochastic gradient descent.

    PubMed

    Meuleau, Nicolas; Dorigo, Marco

    2002-01-01

    In this article, we study the relationship between the two techniques known as ant colony optimization (ACO) and stochastic gradient descent. More precisely, we show that some empirical ACO algorithms approximate stochastic gradient descent in the space of pheromones, and we propose an implementation of stochastic gradient descent that belongs to the family of ACO algorithms. We then use this insight to explore the mutual contributions of the two techniques. PMID:12171633
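
    The correspondence can be made concrete with a toy example: if pheromone levels parameterize a softmax distribution over paths, then the usual reinforcement-style pheromone update is a stochastic gradient ascent step on the expected reward. The three "paths", rewards, and learning rate below are illustrative, not taken from the article.

        # Pheromone update as stochastic gradient ascent (Python).
        import numpy as np

        rng = np.random.default_rng(0)
        reward = np.array([0.2, 0.5, 1.0])       # hypothetical mean reward per path
        tau = np.zeros(3)                        # "pheromone" parameters
        lr = 0.1
        for _ in range(2000):
            p = np.exp(tau) / np.exp(tau).sum()  # path-choice distribution
            a = rng.choice(3, p=p)
            r = reward[a] + 0.1 * rng.standard_normal()   # noisy reward
            grad_log = -p
            grad_log[a] += 1.0                   # gradient of log p(a) w.r.t. tau
            tau += lr * r * grad_log             # SGD step in pheromone space
        print(np.exp(tau) / np.exp(tau).sum())   # mass concentrates on the best path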

  5. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N²) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced without noticeably sacrificing matrix sparsity or memory consumption. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posedness. Compared with previous publications and laboratory measurements, good agreement is observed.

  6. Finite-Size Scaling in Random K-SAT Problems

    NASA Astrophysics Data System (ADS)

    Ha, Meesoon; Lee, Sang Hoon; Jeon, Chanil; Jeong, Hawoong

    2010-03-01

    We propose a comprehensive view of threshold behaviors in random K-satisfiability (K-SAT) problems, in the context of the finite-size scaling (FSS) concept of nonequilibrium absorbing phase transitions using the average SAT (ASAT) algorithm. In particular, we focus on the value of the FSS exponent to characterize the SAT/UNSAT phase transition, which is still debatable. We also discuss the role of the noise (temperature-like) parameter in stochastic local heuristic search algorithms.
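
    For readers unfamiliar with ASAT, a compact sketch of the heuristic as commonly described (pick a random unsatisfied clause and a random variable in it; accept the flip if the number of unsatisfied clauses does not increase, otherwise accept with a small probability p) follows; the tiny instance and the value of p are placeholders.

        # ASAT-style local search for SAT (Python); clauses are tuples of
        # signed integers, e.g. (1, -2, 3) means x1 or not-x2 or x3.
        import random

        def n_unsat(clauses, assign):
            return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

        def asat(clauses, n_vars, p=0.2, max_steps=100000, rng=random.Random(0)):
            assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
            for _ in range(max_steps):
                unsat = [c for c in clauses
                         if not any(assign[abs(l)] == (l > 0) for l in c)]
                if not unsat:
                    return assign                  # satisfying assignment found
                v = abs(rng.choice(rng.choice(unsat)))
                before = n_unsat(clauses, assign)
                assign[v] = not assign[v]          # tentative flip
                if n_unsat(clauses, assign) > before and rng.random() > p:
                    assign[v] = not assign[v]      # reject the uphill move
            return None

        print(asat([(1, -2), (-1, 2), (2, 3)], 3))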

  7. Low Scaling Algorithms for the Random Phase Approximation: Imaginary Time and Laplace Transformations.

    PubMed

    Kaltak, Merzuk; Klimeš, Jiří; Kresse, Georg

    2014-06-10

    In this paper, we determine efficient imaginary frequency and imaginary time grids for second-order Møller-Plesset (MP) perturbation theory. The least-squares and Minimax quadratures are compared for periodic systems, finding that the Minimax quadrature performs slightly better for the considered materials. We show that the imaginary frequency grids developed for second order also perform well for the correlation energy in the direct random phase approximation. Furthermore, we show that the polarizabilities on the imaginary time axis can be Fourier-transformed to the imaginary frequency domain, since the time and frequency Minimax grids are dual to each other. The same duality is observed for the least-squares grids. The transformation from imaginary time to imaginary frequency allows one to reduce the time complexity to cubic (in system size), so that random phase approximation (RPA) correlation energies become accessible for large systems.

  8. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed the primary proton fluence near heterogeneous media and propose calculating the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water, and by evaluating dose distributions in example patients with a head-and-neck tumor and with metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm, but is dependent upon patient geometry.

  9. Improved scaling of time-evolving block-decimation algorithm through reduced-rank randomized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Tamascelli, D.; Rosenbach, R.; Plenio, M. B.

    2015-06-01

    When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace, the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the time-evolving block-decimation (TEBD) algorithm, makes use of this observation by selecting the relevant subspace through a decimation technique relying on the singular value decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency with some real-world examples to which TEBD can be successfully applied, and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic SVD routines.
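
    The randomized SVD at the heart of this speed-up can itself be sketched in a few lines, following the standard Halko-Martinsson-Tropp range-finder construction; the oversampling and power-iteration counts below are typical defaults, not the paper's settings.

        # Reduced-rank randomized SVD (Python).
        import numpy as np

        def rsvd(A, rank, oversample=10, n_iter=2, rng=np.random.default_rng(0)):
            m, n = A.shape
            Omega = rng.standard_normal((n, rank + oversample))
            Y = A @ Omega                      # sample the range of A
            for _ in range(n_iter):            # power iterations sharpen the basis
                Y = A @ (A.conj().T @ Y)
            Q, _ = np.linalg.qr(Y)
            B = Q.conj().T @ A                 # small (rank+oversample) x n matrix
            Ub, s, Vh = np.linalg.svd(B, full_matrices=False)
            return (Q @ Ub)[:, :rank], s[:rank], Vh[:rank]

        A = np.random.default_rng(1).standard_normal((400, 300))
        U, s, Vh = rsvd(A, rank=20)            # approximates the top 20 triplets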

  10. The adaptive dynamic community detection algorithm based on the non-homogeneous random walking

    NASA Astrophysics Data System (ADS)

    Xin, Yu; Xie, Zhi-Qiang; Yang, Jing

    2016-05-01

    As habits and customs change, people's social activity patterns become increasingly changeable, so a community-evolution analysis method is needed to mine the dynamic information in social networks. To this end, we design a random-walking possibility function and a topology gain function to calculate the global influence matrix of the nodes. From the global influence matrix, the clustering direction of each node can be obtained, which establishes the NRW (Non-Homogeneous Random Walk) method for detecting static overlapping communities. Building on NRW, we design the ANRW (Adaptive Non-Homogeneous Random Walk) method, which adapts the nodes affected by dynamic events. The ANRW combines local community detection with dynamic adaptive adjustment to decrease its computational cost. Furthermore, the ANRW treats each node as the unit of computation, so it is well suited to parallel computing and can meet the requirements of large-scale dataset mining. Finally, experimental analysis verifies the efficiency of ANRW for dynamic community detection.

  11. Expectation-Maximization Algorithm Based System Identification of Multiscale Stochastic Models for Scale Recursive Estimation of Precipitation: Application to Model Validation and Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Gupta, R.; Venugopal, V.; Foufoula-Georgiou, E.

    2003-12-01

    Owing to the tremendous scale-dependent variability of precipitation and discrepancies in scale or resolution among different types/sources of observations, comparing or merging observations at different scales, or validating Quantitative Precipitation Forecasts (QPF) with observations, is not trivial. Traditional methods of QPF validation (e.g., point to area) have been found deficient, and to alleviate some of these concerns, a new methodology called scale-recursive estimation (SRE) was introduced recently. This method, which has its roots in Kalman filtering, can (i) handle disparate (in scale) measurement sources; (ii) account for the observational uncertainty associated with each sensor; and (iii) incorporate a multiscale model (theoretical or empirical) which captures the observed scale-to-scale variability in precipitation. The result is an optimal (unbiased and minimum error variance) estimate at any desired scale, along with its error statistics. Our preliminary studies have indicated that lognormal and bounded lognormal multiplicative cascades are the most successful candidates as state-propagation models for precipitation across a range of scales. However, the parameters of these models were found to be highly sensitive to the observed intermittency of precipitation fields. To address this problem, we have chosen to take a "system identification" approach instead of prescribing a priori the type of multiscale model. The first part of this work focuses on the use of Maximum Likelihood (ML) identification for estimating the parameters of a multiscale stochastic state space model directly from the given data. The Expectation-Maximization (EM) algorithm is used to iteratively solve for ML estimates. The "expectation" step makes use of a Kalman smoother to estimate the state, while the "maximization" step re-estimates the parameters using these uncertain state estimates. Using high resolution forecast precipitation fields from ARPS (Advanced Regional Prediction System), concurrent

  12. Land cover and land use mapping of the iSimangaliso Wetland Park, South Africa: comparison of oblique and orthogonal random forest algorithms

    NASA Astrophysics Data System (ADS)

    Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad

    2016-01-01

    In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.

  13. Neural mechanism for stochastic behaviour during a competitive game.

    PubMed

    Soltani, Alireza; Lee, Daeyeol; Wang, Xiao-Jing

    2006-10-01

    Previous studies have shown that non-human primates can generate highly stochastic choice behaviour, especially when this is required during a competitive interaction with another agent. To understand the neural mechanism of such dynamic choice behaviour, we propose a biologically plausible model of decision making endowed with synaptic plasticity that follows a reward-dependent stochastic Hebbian learning rule. This model constitutes a biophysical implementation of reinforcement learning, and it reproduces salient features of behavioural data from an experiment with monkeys playing a matching pennies game. Due to interaction with an opponent and learning dynamics, the model generates quasi-random behaviour robustly in spite of intrinsic biases. Furthermore, non-random choice behaviour can also emerge when the model plays against a non-interactive opponent, as observed in the monkey experiment. Finally, when combined with a meta-learning algorithm, our model accounts for the slow drift in the animal's strategy based on a process of reward maximization.
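
    A schematic (deliberately far simpler than the paper's biophysical network) of reward-dependent learning in matching pennies: two synaptic weights drive a softmax choice, and the chosen action's weight moves toward the received reward. All constants are illustrative.

        # Toy reward-dependent learning in matching pennies (Python).
        import numpy as np

        rng = np.random.default_rng(0)
        w = np.zeros(2)              # synaptic strengths for the two actions
        lr, beta = 0.2, 3.0
        for t in range(5000):
            p = np.exp(beta * w) / np.exp(beta * w).sum()
            a = rng.choice(2, p=p)
            opp = rng.choice(2)      # non-interactive opponent plays at random
            r = 1.0 if a == opp else 0.0      # payoff for the "matcher"
            w[a] += lr * (r - w[a])  # reward-dependent, choice-specific update
        print(p)                     # choice probabilities hover near 0.5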

  14. Sublinear scaling for time-dependent stochastic density functional theory

    SciTech Connect

    Gao, Yi; Neuhauser, Daniel; Baer, Roi; Rabani, Eran

    2015-01-21

    A stochastic approach to time-dependent density functional theory is developed for computing the absorption cross section and the random phase approximation (RPA) correlation energy. The core idea of the approach involves time-propagation of a small set of stochastic orbitals which are first projected on the occupied space and then propagated in time according to the time-dependent Kohn-Sham equations. The evolving electron density is exactly represented when the number of random orbitals is infinite, but even a small number (≈16) of such orbitals is enough to obtain meaningful results for absorption spectrum and the RPA correlation energy per electron. We implement the approach for silicon nanocrystals using real-space grids and find that the overall scaling of the algorithm is sublinear with computational time and memory.

  15. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate their position with the precision necessary for a gripping robot to pick them up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for an object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin-picking problem, exemplified here by the localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable to a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  16. Efficient stochastic superparameterization for geophysical turbulence

    PubMed Central

    Grooms, Ian; Majda, Andrew J.

    2013-01-01

    Efficient computation of geophysical turbulence, such as occurs in the atmosphere and ocean, is a formidable challenge for the following reasons: the complex combination of waves, jets, and vortices; significant energetic backscatter from unresolved small scales to resolved large scales; a lack of dynamical scale separation between large and small scales; and small-scale instabilities, conditional on the large scales, which do not saturate. Nevertheless, efficient methods are needed to allow large ensemble simulations of sufficient size to provide meaningful quantifications of uncertainty in future predictions and past reanalyses through data assimilation and filtering. Here, a class of efficient stochastic superparameterization algorithms is introduced. In contrast to conventional superparameterization, the method here (i) does not require the simulation of nonlinear eddy dynamics on periodic embedded domains, (ii) includes a better representation of unresolved small-scale instabilities, and (iii) allows efficient representation of a much wider range of unresolved scales. The simplest algorithm implemented here radically improves efficiency by representing small-scale eddies at and below the limit of computational resolution by a suitable one-dimensional stochastic model of random-direction plane waves. In contrast to heterogeneous multiscale methods, the methods developed here do not require strong scale separation or conditional equilibration of local statistics. The simplest algorithm introduced here shows excellent performance on a difficult test suite of prototype problems for geophysical turbulence with waves, jets, and vortices, with a speedup of several orders of magnitude compared with direct simulation. PMID:23487800

  17. Stochastic operator-splitting method for reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Choi, TaiJung; Maurya, Mano Ram; Tartakovsky, Daniel M.; Subramaniam, Shankar

    2012-11-01

    Many biochemical processes at the sub-cellular level involve a small number of molecules. The local numbers of these molecules vary in space and time, and exhibit random fluctuations that can only be captured with stochastic simulations. We present a novel stochastic operator-splitting algorithm to model such reaction-diffusion phenomena. The reaction and diffusion steps employ stochastic simulation algorithms and Brownian dynamics, respectively. Through theoretical analysis, we have developed an algorithm to identify if the system is reaction-controlled, diffusion-controlled or is in an intermediate regime. The time-step size is chosen accordingly at each step of the simulation. We have used three examples to demonstrate the accuracy and robustness of the proposed algorithm. The first example deals with diffusion of two chemical species undergoing an irreversible bimolecular reaction. It is used to validate our algorithm by comparing its results with the solution obtained from a corresponding deterministic partial differential equation at low and high number of molecules. In this example, we also compare the results from our method to those obtained using a Gillespie multi-particle (GMP) method. The second example, which models simplified RNA synthesis, is used to study the performance of our algorithm in reaction- and diffusion-controlled regimes and to investigate the effects of local inhomogeneity. The third example models reaction-diffusion of CheY molecules through the cytoplasm of Escherichia coli during chemotaxis. It is used to compare the algorithm's performance against the GMP method. Our analysis demonstrates that the proposed algorithm enables accurate simulation of the kinetics of complex and spatially heterogeneous systems. It is also computationally more efficient than commonly used alternatives, such as the GMP method.
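
    A bare-bones version of the splitting idea, alternating a Brownian diffusion substep with a well-mixed reaction substep for the irreversible bimolecular reaction A + B -> C; for brevity the reaction substep below uses a Poisson (tau-leap style) draw rather than the paper's exact SSA stepping, and the rate constant absorbs the voxel volume. All constants are illustrative.

        # Operator-split reaction-diffusion toy model (Python).
        import numpy as np

        rng = np.random.default_rng(0)
        L, D, k, dt = 1.0, 1e-2, 0.01, 1e-3       # domain, diffusivity, rate, step
        A = list(rng.uniform(0, L, 200))          # positions of A particles
        B = list(rng.uniform(0, L, 200))          # positions of B particles
        for step in range(2000):
            # diffusion substep: Brownian displacement on a periodic domain
            A = list((np.array(A) + np.sqrt(2 * D * dt) * rng.standard_normal(len(A))) % L)
            B = list((np.array(B) + np.sqrt(2 * D * dt) * rng.standard_normal(len(B))) % L)
            # reaction substep: fire ~Poisson(k * nA * nB * dt) reactions
            n = min(rng.poisson(k * len(A) * len(B) * dt), len(A), len(B))
            for _ in range(n):
                A.pop(rng.integers(len(A)))       # consume one A...
                B.pop(rng.integers(len(B)))       # ...and one B per reaction
        print(len(A), len(B))                     # populations after depletion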

  18. Stochastic superparameterization in quasigeostrophic turbulence

    SciTech Connect

    Grooms, Ian; Majda, Andrew J.

    2014-08-15

    In this article we expand and develop the authors' recently proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, yet with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g., the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and

  19. Stochastic superparameterization in quasigeostrophic turbulence

    NASA Astrophysics Data System (ADS)

    Grooms, Ian; Majda, Andrew J.

    2014-08-01

    In this article we expand and develop the authors' recently proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, yet with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g., the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and

  20. Stochastic games

    PubMed Central

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883

  1. Nonlinear optimization for stochastic simulations.

    SciTech Connect

    Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.

    2003-12-01

    This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.

  2. Development and Evaluation of a New Air Exchange Rate Algorithm for the Stochastic Human Exposure and Dose Simulation Model (ISES Presentation)

    EPA Science Inventory

    Previous exposure assessment panel studies have observed considerable seasonal, between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure ...

  3. Stochastic resonance

    NASA Astrophysics Data System (ADS)

    Gammaitoni, Luca; Hänggi, Peter; Jung, Peter; Marchesoni, Fabio

    1998-01-01

    Over the last two decades, stochastic resonance has continuously attracted considerable attention. The term is given to a phenomenon that is manifest in nonlinear systems whereby generally feeble input information (such as a weak signal) can be amplified and optimized by the assistance of noise. The effect requires three basic ingredients: (i) an energetic activation barrier or, more generally, a form of threshold; (ii) a weak coherent input (such as a periodic signal); (iii) a source of noise that is inherent in the system, or that adds to the coherent input. Given these features, the response of the system undergoes resonance-like behavior as a function of the noise level; hence the name stochastic resonance. The underlying mechanism is fairly simple and robust. As a consequence, stochastic resonance has been observed in a large variety of systems, including bistable ring lasers, semiconductor devices, chemical reactions, and mechanoreceptor cells in the tail fan of a crayfish. In this paper, the authors report, interpret, and extend much of the current understanding of the theory and physics of stochastic resonance. They introduce the readers to the basic features of stochastic resonance and its recent history. Definitions of the characteristic quantities that are important to quantify stochastic resonance, together with the most important tools necessary to actually compute those quantities, are presented. The essence of classical stochastic resonance theory is presented, and important applications of stochastic resonance in nonlinear optics, solid state devices, and neurophysiology are described and put into context with stochastic resonance theory. More elaborate and recent developments of stochastic resonance theory are discussed, ranging from fundamental quantum properties (important at low temperatures), over spatiotemporal aspects in spatially distributed systems, to realizations in chaotic maps. In conclusion the authors summarize the achievements
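
    The canonical setup is easy to reproduce numerically: an overdamped particle in a double well, a weak periodic forcing, and Euler-Maruyama integration of the noise. The hop count below is a crude proxy for response coherence (a full analysis would compute the spectral amplification or SNR), and all parameters are illustrative.

        # Double-well stochastic resonance sketch (Python).
        import numpy as np

        def hops_per_run(D, T=2000.0, dt=1e-2, A=0.3, omega=2*np.pi*0.01, seed=0):
            rng = np.random.default_rng(seed)
            x, hops = -1.0, 0
            for i in range(int(T / dt)):
                t = i * dt
                drift = x - x**3 + A * np.cos(omega * t)   # -dV/dx plus weak signal
                x_new = x + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
                if x * x_new < 0:
                    hops += 1                              # crossed between wells
                x = x_new
            return hops

        for D in (0.05, 0.15, 0.4, 1.0):
            print(D, hops_per_run(D))   # switching is most signal-locked at
                                        # an intermediate noise strength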

  4. Efficient asymmetric image authentication schemes based on photon counting-double random phase encoding and RSA algorithms.

    PubMed

    Moon, Inkyu; Yi, Faliu; Han, Mingu; Lee, Jieun

    2016-06-01

    Recently, double random phase encoding (DRPE) has been integrated with the photon counting (PC) imaging technique for the purpose of secure image authentication. In this scheme, the same key should be securely distributed and shared between the sender and receiver, but this is one of the most vexing problems of symmetric cryptosystems. In this study, we propose an efficient asymmetric image authentication scheme by combining the PC-DRPE and RSA algorithms, which solves key management and distribution problems. The retrieved image from the proposed authentication method contains photon-limited encrypted data obtained by means of PC-DRPE. Therefore, the original image can be protected while the retrieved image can be efficiently verified using a statistical nonlinear correlation approach. Experimental results demonstrate the feasibility of our proposed asymmetric image authentication method.
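
    The DRPE core (without the photon-counting and RSA layers that the paper adds on top) can be stated numerically in a few lines; the random test image is a placeholder, and the two phase masks play the role of the symmetric key.

        # Classical double random phase encoding (Python).
        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))                          # stand-in input image
        phi1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane mask
        phi2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane mask

        enc = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)  # encryption
        dec = np.fft.ifft2(np.fft.fft2(enc) * np.conj(phi2)) * np.conj(phi1)
        print(np.allclose(dec.real, img))                   # True: keys recover image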

  5. [Study on prediction of compound-target-disease network of chuanxiong rhizoma based on random forest algorithm].

    PubMed

    Yuan, Jie; Li, Xiao-Jie; Chen, Chao; Song, Xiang-Gang; Wang, Shu-Mei

    2014-06-01

    Small-molecule drugs and their drug-target data, covering enzymes, ion channels, G-protein-coupled receptors, and nuclear receptors, were collected from the KEGG database as training sets in order to establish drug-target interaction models based on the random forest algorithm. The accuracies of the models were evaluated by a 10-fold cross-validation test, showing that the predicted success rates of the four drug-target models were 71.34%, 67.08%, 73.17%, and 67.83%, respectively. The models were adopted to predict the targets of 26 chemical components and to establish the compound-target-disease network. The results were well verified against the literature. The models established in this paper are highly accurate and can be used to discover potential targets of other traditional Chinese medicine ingredients. PMID:25244771
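
    The workflow generalizes readily; a generic sketch with scikit-learn follows, in which synthetic descriptor vectors stand in for the KEGG-derived features and labels.

        # Random forest drug-target model with 10-fold cross-validation (Python).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.random((500, 32))            # descriptor vectors (placeholder)
        y = rng.integers(0, 2, 500)          # 1 = interacting pair, 0 = decoy
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print(cross_val_score(clf, X, y, cv=10).mean())   # 10-fold CV accuracy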

  6. Efficient asymmetric image authentication schemes based on photon counting-double random phase encoding and RSA algorithms.

    PubMed

    Moon, Inkyu; Yi, Faliu; Han, Mingu; Lee, Jieun

    2016-06-01

    Recently, double random phase encoding (DRPE) has been integrated with the photon counting (PC) imaging technique for the purpose of secure image authentication. In this scheme, the same key should be securely distributed and shared between the sender and receiver, but this is one of the most vexing problems of symmetric cryptosystems. In this study, we propose an efficient asymmetric image authentication scheme by combining the PC-DRPE and RSA algorithms, which solves key management and distribution problems. The retrieved image from the proposed authentication method contains photon-limited encrypted data obtained by means of PC-DRPE. Therefore, the original image can be protected while the retrieved image can be efficiently verified using a statistical nonlinear correlation approach. Experimental results demonstrate the feasibility of our proposed asymmetric image authentication method. PMID:27411183

  7. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  8. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  9. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  10. Fock space, symbolic algebra, and analytical solutions for small stochastic systems.

    PubMed

    Santos, Fernando A N; Gadêlha, Hermes; Gaffney, Eamonn A

    2015-12-01

    Randomness is ubiquitous in nature. From single-molecule biochemical reactions to macroscale biological systems, stochasticity permeates individual interactions and often regulates emergent properties of the system. While such systems are regularly studied from a modeling viewpoint using stochastic simulation algorithms, numerous potential analytical tools can be inherited from statistical and quantum physics, replacing randomness due to quantum fluctuations with low-copy-number stochasticity. Nevertheless, classical studies remained limited to the abstract level, demonstrating a more general applicability and equivalence between systems in physics and biology rather than exploiting the physics tools to study biological systems. Here the Fock space representation, used in quantum mechanics, is combined with the symbolic algebra of creation and annihilation operators to consider explicit solutions for the chemical master equations describing small, well-mixed, biochemical, or biological systems. This is illustrated with an exact solution for a Michaelis-Menten single enzyme interacting with limited substrate, including a consideration of very short time scales, which emphasizes when stiffness is present even for small copy numbers. Furthermore, we present a general matrix representation for Michaelis-Menten kinetics with an arbitrary number of enzymes and substrates that, following diagonalization, leads to the solution of this ubiquitous, nonlinear enzyme kinetics problem. For this, a flexible symbolic Maple code is provided, demonstrating the prospective advantages of this framework compared to stochastic simulation algorithms. This further highlights the possibilities for analytically based studies of stochastic systems in biology and chemistry using tools from theoretical quantum physics.
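
    For systems this small, the chemical master equation can also be solved numerically by exponentiating its generator, a useful cross-check on analytical Fock-space results. A sketch for a single enzyme with limited substrate follows; the state is (free substrate s, complex c), and all rate constants are illustrative.

        # Exact CME solution for single-enzyme Michaelis-Menten (Python).
        import numpy as np
        from scipy.linalg import expm

        S0, k1, km1, k2 = 10, 1.0, 0.5, 0.8
        states = [(s, c) for s in range(S0 + 1) for c in (0, 1) if s + c <= S0]
        ix = {st: i for i, st in enumerate(states)}
        Q = np.zeros((len(states), len(states)))

        def add(src, dst, rate):               # generator entries: dP/dt = Q P
            Q[ix[dst], ix[src]] += rate
            Q[ix[src], ix[src]] -= rate

        for s, c in states:
            if c == 0 and s > 0:
                add((s, 0), (s - 1, 1), k1 * s)   # binding   E + S -> ES
            if c == 1:
                add((s, 1), (s + 1, 0), km1)      # unbinding ES -> E + S
                add((s, 1), (s, 0), k2)           # catalysis ES -> E + P

        p0 = np.zeros(len(states)); p0[ix[(S0, 0)]] = 1.0
        pt = expm(Q * 5.0) @ p0                   # full distribution at t = 5
        print(sum(p * (S0 - s - c) for p, (s, c) in zip(pt, states)))  # mean product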

  11. Fock space, symbolic algebra, and analytical solutions for small stochastic systems

    NASA Astrophysics Data System (ADS)

    Santos, Fernando A. N.; Gadêlha, Hermes; Gaffney, Eamonn A.

    2015-12-01

    Randomness is ubiquitous in nature. From single-molecule biochemical reactions to macroscale biological systems, stochasticity permeates individual interactions and often regulates emergent properties of the system. While such systems are regularly studied from a modeling viewpoint using stochastic simulation algorithms, numerous potential analytical tools can be inherited from statistical and quantum physics, replacing randomness due to quantum fluctuations with low-copy-number stochasticity. Nevertheless, classical studies remained limited to the abstract level, demonstrating a more general applicability and equivalence between systems in physics and biology rather than exploiting the physics tools to study biological systems. Here the Fock space representation, used in quantum mechanics, is combined with the symbolic algebra of creation and annihilation operators to consider explicit solutions for the chemical master equations describing small, well-mixed, biochemical, or biological systems. This is illustrated with an exact solution for a Michaelis-Menten single enzyme interacting with limited substrate, including a consideration of very short time scales, which emphasizes when stiffness is present even for small copy numbers. Furthermore, we present a general matrix representation for Michaelis-Menten kinetics with an arbitrary number of enzymes and substrates that, following diagonalization, leads to the solution of this ubiquitous, nonlinear enzyme kinetics problem. For this, a flexible symbolic Maple code is provided, demonstrating the prospective advantages of this framework compared to stochastic simulation algorithms. This further highlights the possibilities for analytically based studies of stochastic systems in biology and chemistry using tools from theoretical quantum physics.

  12. Fock space, symbolic algebra, and analytical solutions for small stochastic systems.

    PubMed

    Santos, Fernando A N; Gadêlha, Hermes; Gaffney, Eamonn A

    2015-12-01

    Randomness is ubiquitous in nature. From single-molecule biochemical reactions to macroscale biological systems, stochasticity permeates individual interactions and often regulates emergent properties of the system. While such systems are regularly studied from a modeling viewpoint using stochastic simulation algorithms, numerous potential analytical tools can be inherited from statistical and quantum physics, replacing randomness due to quantum fluctuations with low-copy-number stochasticity. Nevertheless, classical studies remained limited to the abstract level, demonstrating a more general applicability and equivalence between systems in physics and biology rather than exploiting the physics tools to study biological systems. Here the Fock space representation, used in quantum mechanics, is combined with the symbolic algebra of creation and annihilation operators to consider explicit solutions for the chemical master equations describing small, well-mixed, biochemical, or biological systems. This is illustrated with an exact solution for a Michaelis-Menten single enzyme interacting with limited substrate, including a consideration of very short time scales, which emphasizes when stiffness is present even for small copy numbers. Furthermore, we present a general matrix representation for Michaelis-Menten kinetics with an arbitrary number of enzymes and substrates that, following diagonalization, leads to the solution of this ubiquitous, nonlinear enzyme kinetics problem. For this, a flexible symbolic Maple code is provided, demonstrating the prospective advantages of this framework compared to stochastic simulation algorithms. This further highlights the possibilities for analytically based studies of stochastic systems in biology and chemistry using tools from theoretical quantum physics. PMID:26764734

  13. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    PubMed

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo

    2012-06-01

    Chest radiologists rely on the segmentation and quantitative analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stage of diffuse parenchymal lung diseases. However, compared with other lung diseases, GGO are computationally difficult to segment and analyze, since they usually do not have clear boundaries. In this paper, we present a new approach that automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory, and we systematically evaluate the performance of these algorithms in segmenting GGO in lung CT images under different conditions. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple maximum a posteriori (MAP) and adaptive MAP (AMAP) models. For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of the MAP estimators, and we applied a knowledge-guided strategy to reduce false-positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation as well as quantitative analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.

  14. Applicability of random sequential adsorption algorithm for simulation of surface plasma polishing kinetics

    NASA Astrophysics Data System (ADS)

    Minárik, Stanislav; Vaňa, Dušan

    2015-11-01

    The applicability of a random sequential adsorption (RSA) model to material removal during surface plasma polishing is discussed. A modified version of the RSA model takes the mechanical nature of the plasma polishing process into consideration. During plasma polishing, the surface layer is smoothed as molecules of material are removed from the surface mechanically, as a consequence of the surface deformation induced by the impact of plasma particles. We propose a modification of the RSA technique to describe the reduction of material on the surface, provided that the sequential character of molecule release from the surface is maintained throughout the polishing process. This empirical model can estimate the depth profile of material density on the surface during plasma polishing. We have shown that preliminary results obtained from this model are in good agreement with experimental results. We believe that molecular dynamics simulation of the polishing process, and possibly also of other types of surface treatment, can be based on this model. However, the influence of material parameters and processing conditions (including plasma characteristics) must be taken into account through appropriate model variables.
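
    For reference, the unmodified base model is the classic random sequential adsorption of hard disks, which fits in a few lines; the box size, disk radius, and attempt budget below are arbitrary.

        # Random sequential adsorption of hard disks (Python).
        import numpy as np

        rng = np.random.default_rng(0)
        L, r, max_attempts = 1.0, 0.03, 20000
        centers = []
        for _ in range(max_attempts):
            p = rng.uniform(r, L - r, 2)          # trial center inside the box
            if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= (2 * r)**2 for q in centers):
                centers.append(p)                 # accept: no overlap, never removed
        coverage = len(centers) * np.pi * r**2 / L**2
        print(len(centers), coverage)             # coverage approaches the jamming limit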

  15. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Yin, George; Wang, Le Yi; Zhang, Hongwei

    2014-12-01

    Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
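
    For readers new to the area, the basic Robbins-Monro template underlying all of these methods finds a root of a mean field from noisy evaluations alone; the target function and the 1/n step sizes below are a textbook illustration, not an example from the survey.

        # Robbins-Monro stochastic approximation (Python).
        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: x - 2.0                     # unknown mean field, root at x = 2
        x = 0.0
        for n in range(1, 10001):
            noisy = f(x) + rng.standard_normal()  # only noisy observations available
            x -= (1.0 / n) * noisy                # step sizes a_n = 1/n
        print(x)                                  # converges to 2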

  16. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems

    SciTech Connect

    Yin, George; Wang, Le Yi; Zhang, Hongwei

    2014-12-10

    Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.

  17. Comparison between WorldView-2 and SPOT-5 images in mapping the bracken fern using the random forest algorithm

    NASA Astrophysics Data System (ADS)

    Odindi, John; Adam, Elhadi; Ngubane, Zinhle; Mutanga, Onisimo; Slotow, Rob

    2014-01-01

    Plant species invasion is known to be a major threat to socioeconomic and ecological systems. Due to the high cost and limited extent of urban green spaces, high mapping accuracy is necessary to optimize the management of such spaces. We compare the performance of the new-generation WorldView-2 (WV-2) and SPOT-5 images in mapping the bracken fern [Pteridium aquilinum (L.) Kuhn] in a conserved urban landscape. Using the random forest algorithm, grid-search approaches based on the out-of-bag (OOB) error estimate were used to determine the optimal ntree and mtry combinations. The variable importance and backward feature elimination techniques were further used to determine the influence of the image bands on mapping accuracy. Additionally, the value of the commonly used vegetation indices in enhancing the classification accuracy was tested on the better-performing image data. Results show that the performance of the new WV-2 bands was better than that of the traditional bands. Overall classification accuracies of 84.72 and 72.22% were achieved for the WV-2 and SPOT images, respectively. Use of selected indices from the WV-2 bands increased the overall classification accuracy to 91.67%. The findings in this study show the suitability of new-generation imagery for mapping the bracken fern within often vulnerable urban natural vegetation cover types.
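
    The OOB grid search is straightforward to reproduce with scikit-learn, where n_estimators plays the role of ntree and max_features that of mtry; the synthetic band values and class labels below are placeholders for the WorldView-2 data.

        # Out-of-bag grid search for a random forest classifier (Python).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.random((600, 8))                  # 8 spectral bands (placeholder)
        y = rng.integers(0, 4, 600)               # 4 land-cover classes (placeholder)
        best = None
        for ntree in (100, 300, 500):
            for mtry in (2, 3, 4):
                rf = RandomForestClassifier(n_estimators=ntree, max_features=mtry,
                                            oob_score=True, random_state=0).fit(X, y)
                if best is None or rf.oob_score_ > best[0]:
                    best = (rf.oob_score_, ntree, mtry)
        print(best)                               # (OOB accuracy, ntree, mtry)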

  18. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)

  19. The bi-objective stochastic covering tour problem

    PubMed Central

    Tricoire, Fabien; Graf, Alexandra; Gutjahr, Walter J.

    2012-01-01

    We formulate a bi-objective covering tour model with stochastic demand where the two objectives are given by (i) cost (opening cost for distribution centers plus routing cost for a fleet of vehicles) and (ii) expected uncovered demand. In the model, it is assumed that depending on the distance, a certain percentage of clients go from their homes to the nearest distribution center. An application in humanitarian logistics is envisaged. For the computational solution of the resulting bi-objective two-stage stochastic program with recourse, a branch-and-cut technique, applied to a sample-average version of the problem obtained from a fixed random sample of demand vectors, is used within an epsilon-constraint algorithm. Computational results on real-world data for rural communities in Senegal show the viability of the approach. PMID:23471203

  20. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  1. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  2. A fast and memory-sparing probabilistic selection algorithm for the GPU

    SciTech Connect

    Monroe, Laura M; Wendelberger, Joanne; Michalak, Sarah

    2010-09-29

    A fast and memory-sparing probabilistic top-N selection algorithm is implemented on the GPU. This probabilistic algorithm gives a deterministic result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces both the memory requirements and the average time required for the algorithm. This algorithm is well-suited to more general parallel processors with multiple layers of memory hierarchy. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be especially useful for processors having a limited amount of fast memory available.
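
    The randomization idea, discarding most of the data around a random pivot before any heavy processing, can be illustrated with a CPU-side Las Vegas top-N selection; the GPU memory-hierarchy engineering that is the paper's actual contribution is omitted:

        import random

        def top_n(data, n):
            """Return the n largest values, descending; random pivots prune the data."""
            if n >= len(data):
                return sorted(data, reverse=True)
            pivot = random.choice(data)
            upper = [x for x in data if x > pivot]
            if len(upper) >= n:                      # answer lies entirely above the pivot
                return top_n(upper, n)
            head = sorted(upper, reverse=True) + [x for x in data if x == pivot]
            if len(head) >= n:
                return head[:n]
            return head + top_n([x for x in data if x < pivot], n - len(head))

    Like the paper's algorithm, this sketch always returns the exact top N; only its running time is random.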

  3. Discrete Stochastic Simulation Methods for Chemically Reacting Systems

    PubMed Central

    Cao, Yang; Samuels, David C.

    2012-01-01

    Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present in integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemical dynamics in single cells, a growing number of studies have applied this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods that are based on that theory. We focus on non-stiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's Stochastic Simulation Algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. We then recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application. PMID:19216925
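
    For reference, the direct-method SSA reviewed in the chapter fits in a dozen lines; the propensity function and stoichiometry matrix are assumed to be supplied by the caller:

        import numpy as np

        def ssa(x0, stoich, propensities, t_end, seed=0):
            """Direct-method SSA. stoich: (n_reactions, n_species) update matrix;
            propensities: function mapping a state x to the rate vector a(x)."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, np.array(x0, dtype=int)
            path = [(t, x.copy())]
            while t < t_end:
                a = propensities(x)
                a0 = a.sum()
                if a0 <= 0.0:                        # nothing can fire; system is frozen
                    break
                t += rng.exponential(1.0 / a0)       # waiting time to the next reaction
                j = rng.choice(len(a), p=a / a0)     # index of the reaction that fires
                x = x + stoich[j]
                path.append((t, x.copy()))
            return path

    For the reaction A+B→C, for instance, stoich would contain the row [-1, -1, +1] and propensities(x) would return [k * x[0] * x[1]].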

  4. Stochastic Microlensing: Mathematical Theory and Applications

    NASA Astrophysics Data System (ADS)

    Teguia, Alberto Mokak

    Stochastic microlensing is a central tool in probing dark matter on galactic scales. From first principles, we initiate the development of a mathematical theory of stochastic microlensing. We first construct a natural probability space for stochastic microlensing and characterize the general behaviour of the random time delay functions' random critical sets. Next we study stochastic microlensing in two distinct random microlensing scenarios: The uniform stars' distribution with constant mass spectrum and the spatial stars' distribution with general mass spectrum. For each scenario, we determine exact and asymptotic (in the large number of point masses limit) stochastic properties of the random time delay functions and associated random lensing maps and random shear tensors, including their moments and asymptotic density functions. We use these results to study certain random observables, such as random fixed lensed images, random bending angles, and random magnifications. These results are relevant to the theory of random fields and provide a platform for further generalizations as well as analytical limits for checking astrophysical studies of stochastic microlensing. Continuing our development of a mathematical theory of stochastic microlensing, we study the stochastic version of the Image Counting Problem, first considered in the non-random setting by Einstein and generalized by Petters. In particular, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images for a general random lensing scenario. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to the uniform stars' distribution random microlensing scenario, we calculate the asymptotic global

  5. Stochastic damage evolution in textile laminates

    NASA Technical Reports Server (NTRS)

    Dzenis, Yuris A.; Bogdanovich, Alexander E.; Pastore, Christopher M.

    1993-01-01

    A probabilistic model utilizing random material characteristics to predict damage evolution in textile laminates is presented. The model is based on a division of each ply into two sublaminas consisting of cells. The probability of cell failure is calculated using stochastic function theory and a maximal strain failure criterion. Three modes of failure, i.e., fiber breakage, matrix failure in the transverse direction, and matrix or interface shear cracking, are taken into account. Computed failure probabilities are utilized in reducing cell stiffness based on the mesovolume concept. A numerical algorithm is developed to predict the damage evolution and deformation history of textile laminates. The effect of scatter of fiber orientation on cell properties is discussed. Weave influence on damage accumulation is illustrated with the example of a Kevlar/epoxy laminate.

  6. Network-based stochastic semisupervised learning.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2012-03-01

    Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.

  7. A non-stochastic iterative computational method to model light propagation in turbid media

    NASA Astrophysics Data System (ADS)

    McIntyre, Thomas J.; Zemp, Roger J.

    2015-03-01

    Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.

  8. Attainability analysis in stochastic controlled systems

    SciTech Connect

    Ryashko, Lev

    2015-03-10

    A control problem for stochastically forced nonlinear continuous-time systems is considered. We propose a method for constructing a regulator that provides a preassigned probabilistic distribution of random states in stochastic equilibrium. Geometric criteria of controllability are obtained. A constructive technique for the specification of attainability sets is suggested.

  9. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  10. A scalable framework for the solution of stochastic inverse problems using a sparse grid collocation approach

    SciTech Connect

    Zabaras, N.; Ganapathysubramanian, B.

    2008-04-20

    Experimental evidence suggests that the dynamics of many physical phenomena are significantly affected by the underlying uncertainties associated with variations in properties and fluctuations in operating conditions. Recent developments in stochastic analysis have opened the possibility of realistic modeling of such systems in the presence of multiple sources of uncertainties. These advances raise the possibility of solving the corresponding stochastic inverse problem: the problem of designing/estimating the evolution of a system in the presence of multiple sources of uncertainty given limited information. A scalable, parallel methodology for stochastic inverse/design problems is developed in this article. The representation of the underlying uncertainties and the resultant stochastic dependent variables is performed using a sparse grid collocation methodology. A novel stochastic sensitivity method is introduced based on multiple solutions to deterministic sensitivity problems. The stochastic inverse/design problem is transformed to a deterministic optimization problem in a larger-dimensional space that is subsequently solved using deterministic optimization algorithms. The design framework relies entirely on deterministic direct and sensitivity analysis of the continuum systems, thereby significantly enhancing the range of applicability of the framework for the design in the presence of uncertainty of many other systems usually analyzed with legacy codes. Various illustrative examples with multiple sources of uncertainty including inverse heat conduction problems in random heterogeneous media are provided to showcase the developed framework.

  11. A scalable framework for the solution of stochastic inverse problems using a sparse grid collocation approach

    NASA Astrophysics Data System (ADS)

    Zabaras, N.; Ganapathysubramanian, B.

    2008-04-01

    Experimental evidence suggests that the dynamics of many physical phenomena are significantly affected by the underlying uncertainties associated with variations in properties and fluctuations in operating conditions. Recent developments in stochastic analysis have opened the possibility of realistic modeling of such systems in the presence of multiple sources of uncertainties. These advances raise the possibility of solving the corresponding stochastic inverse problem: the problem of designing/estimating the evolution of a system in the presence of multiple sources of uncertainty given limited information. A scalable, parallel methodology for stochastic inverse/design problems is developed in this article. The representation of the underlying uncertainties and the resultant stochastic dependent variables is performed using a sparse grid collocation methodology. A novel stochastic sensitivity method is introduced based on multiple solutions to deterministic sensitivity problems. The stochastic inverse/design problem is transformed to a deterministic optimization problem in a larger-dimensional space that is subsequently solved using deterministic optimization algorithms. The design framework relies entirely on deterministic direct and sensitivity analysis of the continuum systems, thereby significantly enhancing the range of applicability of the framework for the design in the presence of uncertainty of many other systems usually analyzed with legacy codes. Various illustrative examples with multiple sources of uncertainty including inverse heat conduction problems in random heterogeneous media are provided to showcase the developed framework.

  12. Handling packet dropouts and random delays for unstable delayed processes in NCS by optimal tuning of PIλDμ controllers with evolutionary algorithms.

    PubMed

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-10-01

    The issues of stochastically varying network delays and packet dropouts in Networked Control System (NCS) applications have been simultaneously addressed by time-domain optimal tuning of fractional-order (FO) PID controllers. Different variants of evolutionary algorithms are used for the tuning process and their performances are compared. The effectiveness of the fractional-order PI(λ)D(μ) controllers over their integer-order counterparts is also examined. Two standard test-bench plants with time delay and unstable poles, as encountered in process control applications, are tuned with the proposed method to establish the validity of the tuning methodology. The proposed tuning methodology is independent of the specific choice of plant and is also applicable to less complicated systems, making it useful in a wide variety of scenarios. The paper also shows the superiority of FOPID controllers over their conventional PID counterparts for NCS applications. PMID:21621208

  13. Discontinuity detection in multivariate space for stochastic simulations

    SciTech Connect

    Archibald, Rick; Gelb, Anne; Saxena, Rishu; Xiu, Dongbin

    2009-04-20

    Edge detection has traditionally been associated with detecting physical space jump discontinuities in one dimension, e.g. seismic signals, and two dimensions, e.g. digital images. Hence most of the research on edge detection algorithms is restricted to these contexts. High dimension edge detection can be of significant importance, however. For instance, stochastic variants of classical differential equations not only have variables in space/time dimensions, but additional dimensions are often introduced to the problem by the nature of the random inputs. The stochastic solutions to such problems sometimes contain discontinuities in the corresponding random space and a prior knowledge of jump locations can be very helpful in increasing the accuracy of the final solution. Traditional edge detection methods typically require uniform grid point distribution. They also often involve the computation of gradients and/or Laplacians, which can become very complicated to compute as the number of dimensions increases. The polynomial annihilation edge detection method, on the other hand, is more flexible in terms of its geometric specifications and is furthermore relatively easy to apply. This paper discusses the numerical implementation of the polynomial annihilation edge detection method to high dimensional functions that arise when solving stochastic partial differential equations.

  14. Discontinuity Detection in Multivariate Space for Stochastic Simulations

    SciTech Connect

    Archibald, Richard K; Gelb, Anne; Saxena, Rishu; Xiu, Dongbin

    2009-01-01

    Edge detection has traditionally been associated with detecting physical space jump discontinuities in one dimension, e.g. seismic signals, and two dimensions, e.g. digital images. Hence most of the research on edge detection algorithms is restricted to these contexts. High dimension edge detection can be of significant importance, however. For instance, stochastic variants of classical differential equations not only have variables in space/time dimensions, but additional dimensions are often introduced to the problem by the nature of the random inputs. The stochastic solutions to such problems sometimes contain discontinuities in the corresponding random space and a prior knowledge of jump locations can be very helpful in increasing the accuracy of the final solution. Traditional edge detection methods typically require uniform grid point distribution. They also often involve the computation of gradients and/or Laplacians, which can become very complicated to compute as the number of dimensions increases. The polynomial annihilation edge detection method, on the other hand, is more flexible in terms of its geometric specifications and is furthermore relatively easy to apply. This paper discusses the numerical implementation of the polynomial annihilation edge detection method to high dimensional functions that arise when solving stochastic partial differential equations.

  15. Stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response

  16. Attainability analysis in the stochastic sensitivity control

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina

    2015-02-01

    For a nonlinear dynamic stochastic control system, we construct a feedback regulator that stabilises an equilibrium and synthesises a required dispersion of random states around this equilibrium. Our approach is based on the stochastic sensitivity functions technique. We focus on the investigation of attainability sets for 2-D systems. A detailed parametric description of the attainability domains for various types of control inputs for the stochastic Brusselator is presented. It is shown that the new regulator provides a low level of stochastic sensitivity and can suppress oscillations of large amplitude.

  17. Circumspect descent prevails in solving random constraint satisfaction problems.

    PubMed

    Alava, Mikko; Ardelius, John; Aurell, Erik; Kaski, Petteri; Krishnamurthy, Supriya; Orponen, Pekka; Seitz, Sakari

    2008-10-01

    We study the performance of stochastic local search algorithms for random instances of the K-satisfiability (K-SAT) problem. We present a stochastic local search algorithm, ChainSAT, which moves in the energy landscape of a problem instance by never going upwards in energy. ChainSAT is a focused algorithm in the sense that it focuses on variables occurring in unsatisfied clauses. We show by extensive numerical investigations that ChainSAT and other focused algorithms solve large K-SAT instances almost surely in linear time, up to high clause-to-variable ratios alpha; for example, for K = 4 we observe linear-time performance well beyond the recently postulated clustering and condensation transitions in the solution space. The performance of ChainSAT is a surprise given that by design the algorithm gets trapped into the first local energy minimum it encounters, yet no such minima are encountered. We also study the geometry of the solution space as accessed by stochastic local search algorithms. PMID:18832149
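
    A stripped-down focused descent captures the two properties highlighted above: flip candidates come only from unsatisfied clauses, and moves that increase the energy are never accepted. The real ChainSAT replaces the plain rejection below with chaining moves, so this is only a schematic (clauses are lists of signed integer literals):

        import random

        def n_unsat(clauses, assign):
            """Energy = number of unsatisfied clauses; literals are signed integers."""
            return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

        def focused_descent(clauses, n_vars, max_steps=100000):
            assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
            for _ in range(max_steps):
                unsat = [c for c in clauses
                         if not any(assign[abs(l)] == (l > 0) for l in c)]
                if not unsat:
                    return assign                              # a satisfying assignment
                v = abs(random.choice(random.choice(unsat)))   # focus on an unsatisfied clause
                before = n_unsat(clauses, assign)
                assign[v] = not assign[v]
                if n_unsat(clauses, assign) > before:          # never move upwards in energy
                    assign[v] = not assign[v]                  # reject the flip
            return None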

  18. Time-Ordered Product Expansions for Computational Stochastic Systems Biology

    PubMed Central

    Mjolsness, Eric

    2013-01-01

    The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie’s Stochastic Simulation Algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion (TOPE) can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems. PMID:23735739

  19. Stochastic partial differential equations in turbulence related problems

    NASA Technical Reports Server (NTRS)

    Chow, P.-L.

    1978-01-01

    The theory of stochastic partial differential equations (PDEs) and problems relating to turbulence are discussed by employing the theories of Brownian motion and diffusion in infinite dimensions, functional differential equations, and functional integration. Relevant results in probabilistic analysis, especially Gaussian measures in function spaces and the theory of stochastic PDEs of Ito type, are taken into account. Linear stochastic PDEs are analyzed through linearized Navier-Stokes equations with a random forcing. Stochastic equations for waves in random media as well as model equations in turbulent transport theory are considered. Markovian models in fully developed turbulence are discussed from a stochastic equation viewpoint.

  20. Stochastic Flow Cascades

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Shlesinger, Michael F.

    2012-01-01

    We introduce and explore a Stochastic Flow Cascade (SFC) model: A general statistical model for the unidirectional flow through a tandem array of heterogeneous filters. Examples include the flow of: (i) liquid through heterogeneous porous layers; (ii) shocks through tandem shot noise systems; (iii) signals through tandem communication filters. The SFC model combines the Langevin equation, convolution filters and moving averages, and Poissonian randomizations. A comprehensive analysis of the SFC model is carried out, yielding closed-form results. Lévy laws are shown to universally emerge from the SFC model, and characterize both heavy tailed retention times (Noah effect) and long-ranged correlations (Joseph effect).

  1. Stochastic thermodynamics of resetting

    NASA Astrophysics Data System (ADS)

    Fuchs, Jaco; Goldt, Sebastian; Seifert, Udo

    2016-03-01

    Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.
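
    The non-equilibrium steady state created by resetting is easy to sample numerically. A minimal sketch, simulating one-dimensional Brownian motion with Poissonian resetting to the origin (all parameter values illustrative):

        import numpy as np

        def reset_diffusion(n_steps=100000, dt=1e-3, D=1.0, r=0.5, seed=0):
            """Brownian motion with Poissonian resetting to the origin at rate r."""
            rng = np.random.default_rng(seed)
            x = np.empty(n_steps)
            x[0] = 0.0
            for i in range(1, n_steps):
                if rng.random() < r * dt:        # resetting event erases the current state
                    x[i] = 0.0
                else:                            # otherwise an ordinary diffusion step
                    x[i] = x[i - 1] + np.sqrt(2.0 * D * dt) * rng.standard_normal()
            return x                             # samples the non-equilibrium steady state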

  2. Stochastic learning via optimizing the variational inequalities.

    PubMed

    Tao, Qing; Gao, Qian-Kun; Chu, De-Jun; Wu, Gao-Wei

    2014-10-01

    A wide variety of learning problems can be posed in the framework of convex optimization. Many efficient algorithms have been developed based on solving the induced optimization problems. However, there exists a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use the variational inequality (VI) convergence to describe the learning speed. To this end, we avoid the hard concept of regret in online learning and directly discuss the stochastic learning algorithms. We first cast the regularized learning problem as a VI. Then, we present a stochastic version of the alternating direction method of multipliers (ADMM) to solve the induced VI. We define a new VI-criterion to measure the convergence of stochastic algorithms. While the rate of convergence of any iterative algorithm for solving nonsmooth convex optimization problems cannot be better than O(1/√t), the proposed stochastic ADMM (SADMM) is proved to have an O(1/t) VI-convergence rate for l1-regularized hinge loss problems without strong convexity and smoothness. The derived VI-convergence results also support the viewpoint that the standard online analysis is too loose to analyze the stochastic setting properly. The experiments demonstrate that SADMM has almost the same performance as the state-of-the-art stochastic learning algorithms but its O(1/t) VI-convergence rate is capable of tightly characterizing the real learning speed.

  3. Bayesian Estimation and Inference Using Stochastic Electronics

    PubMed Central

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream. PMID:27047326
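
    In software form, the Bayesian recursive equation that BEAST solves online is the standard hidden Markov model filter update; the transition and observation matrices below are generic placeholders rather than the models the hardware learns:

        import numpy as np

        def hmm_filter(belief, transition, observation, obs):
            """One recursive Bayesian update.
            belief[i] = P(x_{t-1} = i | z_{1:t-1});
            transition[i, j] = P(x_t = j | x_{t-1} = i);
            observation[j, z] = P(z_t = z | x_t = j); obs is the measurement index."""
            predicted = belief @ transition              # prediction through the motion model
            posterior = predicted * observation[:, obs]  # weight by observation likelihood
            return posterior / posterior.sum()           # normalise to a probability vector

    Calling hmm_filter once per time step with the latest measurement index propagates the belief exactly as the tracker does.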

  4. Bayesian Estimation and Inference Using Stochastic Electronics.

    PubMed

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.

  6. Stochastic Simulation of Turing Patterns

    NASA Astrophysics Data System (ADS)

    Fu, Zheng-Ping; Xu, Xin-Hang; Wang, Hong-Li; Ouyang, Qi

    2008-04-01

    We investigate the effects of intrinsic noise on Turing pattern formation near the onset of bifurcation from the homogeneous state to Turing pattern in the reaction-diffusion Brusselator. By performing stochastic simulations of the master equation and using Gillespie's algorithm, we check the spatiotemporal behaviour influenced by internal noises. We demonstrate that the patterns of occurrence frequency for the reaction and diffusion processes are also spatially ordered and temporally stable. Turing patterns are found to be robust against intrinsic fluctuations. Stochastic simulations also reveal that under the influence of intrinsic noises, the onset of Turing instability is advanced in comparison to that predicted deterministically.

  7. Fluctuating currents in stochastic thermodynamics. II. Energy conversion and nonequilibrium response in kinesin models.

    PubMed

    Altaner, Bernhard; Wachtel, Artur; Vollmer, Jürgen

    2015-10-01

    Unlike macroscopic engines, the molecular machinery of living cells is strongly affected by fluctuations. Stochastic thermodynamics uses Markovian jump processes to model the random transitions between the chemical and configurational states of these biological macromolecules. A recently developed theoretical framework [A. Wachtel, J. Vollmer, and B. Altaner, Phys. Rev. E 92, 042132 (2015)] provides a simple algorithm for the determination of macroscopic currents and correlation integrals of arbitrary fluctuating currents. Here we use it to discuss energy conversion and nonequilibrium response in different models for the molecular motor kinesin. Methodologically, our results demonstrate the effectiveness of the algorithm in dealing with parameter-dependent stochastic models. For the concrete biophysical problem our results reveal two interesting features in experimentally accessible parameter regions: the validity of a nonequilibrium Green-Kubo relation at mechanical stalling as well as a negative differential mobility for superstalling forces.

  8. A Simple Stochastic Model for Generating Broken Cloud Optical Depth and Top Height Fields

    NASA Technical Reports Server (NTRS)

    Prigarin, Sergei M.; Marshak, Alexander

    2007-01-01

    A simple and fast algorithm for generating two correlated stochastic two-dimensional (2D) cloud fields is described. The algorithm is illustrated with two broken cumulus cloud fields: cloud optical depth and cloud top height retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS). Only two 2D fields are required as input. The algorithm output is statistical realizations of these two fields with approximately the same correlation and joint distribution functions as the original ones. The major assumption of the algorithm is statistical isotropy of the fields. In contrast to fractals and the Fourier filtering methods frequently used for stochastic cloud modeling, the proposed method is based on spectral models of homogeneous random fields. For keeping the same probability density function as the (first) original field, the method of inverse distribution function is used. When the spatial distribution of the first field has been generated, a realization of the correlated second field is simulated using a conditional distribution matrix. This paper serves as a theoretical justification for the publicly available software that has been recently released by the authors and can be freely downloaded from http://i3rc.gsfc.nasa.gov/Public codes clouds.htm. Though 2D rather than fully 3D, stochastic realizations of two correlated cloud fields that mimic the statistics of given fields have proved to be very useful for studying 3D radiative transfer features of broken cumulus clouds, for better understanding of shortwave radiation and interpretation of remote sensing retrievals.
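
    The inverse-distribution-function step can be sketched in two lines: a correlated standard-Gaussian field (produced by the spectral model, assumed given here) is pushed through the empirical quantile function of the original field, so the realization keeps the original one-point distribution:

        import numpy as np
        from scipy.stats import norm

        def match_distribution(gaussian_field, original_field):
            """Push a correlated N(0, 1) field through the original field's quantiles."""
            u = norm.cdf(gaussian_field)                      # Gaussian values -> uniform ranks
            return np.quantile(np.ravel(original_field), u)   # ranks -> original quantiles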

  9. QB1 - Stochastic Gene Regulation

    SciTech Connect

    Munsky, Brian

    2012-07-23

    Summaries of this presentation are: (1) Stochastic fluctuations or 'noise' are present in the cell - random motion and competition between reactants, low copy numbers and quantization of reactants, upstream processes; (2) Fluctuations may be very important - cell-to-cell variability, cell fate decisions (switches), signal amplification or damping, stochastic resonances; and (3) Some tools are available to model these - kinetic Monte Carlo simulations (SSA and variants), moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.

  10. Pharmacogenetics-based warfarin dosing algorithm decreases time to stable anticoagulation and the risk of major hemorrhage: an updated meta-analysis of randomized controlled trials.

    PubMed

    Wang, Zhi-Quan; Zhang, Rui; Zhang, Peng-Pai; Liu, Xiao-Hong; Sun, Jian; Wang, Jun; Feng, Xiang-Fei; Lu, Qiu-Fen; Li, Yi-Gang

    2015-04-01

    Warfarin remains the most widely used oral anticoagulant for thromboembolic diseases, despite the recently introduced novel anticoagulants. However, difficulty in maintaining a stable dose within the therapeutic range and the subsequent serious adverse effects have markedly limited its use in clinical practice. Pharmacogenetics-based warfarin dosing algorithms are a recently developed strategy for predicting the initial and maintenance doses of warfarin. However, whether this strategy is superior to the conventional clinically guided dosing algorithm remains controversial. We compared pharmacogenetics-based and clinically guided dosing algorithms in an updated meta-analysis. We searched OVID MEDLINE, EMBASE, and the Cochrane Library for relevant citations. The primary outcome was the percentage of time in the therapeutic range. The secondary outcomes were time to stable therapeutic dose and the risks of adverse events including all-cause mortality, thromboembolic events, total bleedings, and major bleedings. Eleven randomized controlled trials with 2639 participants were included. Our pooled estimates indicated that the pharmacogenetics-based dosing algorithm did not improve the percentage of time in therapeutic range [weighted mean difference, 4.26; 95% confidence interval (CI), -0.50 to 9.01; P = 0.08], but it significantly shortened the time to stable therapeutic dose (weighted mean difference, -8.67; 95% CI, -11.86 to -5.49; P < 0.00001). Additionally, the pharmacogenetics-based algorithm significantly reduced the risk of major bleedings (odds ratio, 0.48; 95% CI, 0.23 to 0.98; P = 0.04), but it did not reduce the risks of all-cause mortality, total bleedings, or thromboembolic events. Our results suggest that pharmacogenetics-based warfarin dosing algorithms significantly improve the efficiency of International Normalized Ratio correction and reduce the risk of major hemorrhage.

  11. Analysis of stochastic effects in Kaldor-type business cycle discrete model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Sysolyatina, Anna

    2016-07-01

    We study nonlinear stochastic phenomena in the discrete Kaldor model of business cycles. A numerical parametric analysis of the stochastically forced attractors (equilibria, closed invariant curves, discrete cycles) of this model is performed using the stochastic sensitivity functions technique. The spatial arrangement of random states in stochastic attractors is modeled by confidence domains. The phenomenon of noise-induced "chaos-order" transitions is discussed.

  12. Principal axes for stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Vasconcelos, V. V.; Raischel, F.; Haase, M.; Peinke, J.; Wächter, M.; Lind, P. G.; Kleinhans, D.

    2011-09-01

    We introduce a general procedure for directly ascertaining how many independent stochastic sources exist in a complex system modeled through a set of coupled Langevin equations of arbitrary dimension. The procedure is based on the computation of the eigenvalues and the corresponding eigenvectors of local diffusion matrices. We demonstrate our algorithm by applying it to two examples of systems showing Hopf bifurcation. We argue that computing the eigenvectors associated to the eigenvalues of the diffusion matrix at local mesh points in the phase space enables one to define vector fields of stochastic eigendirections. In particular, the eigenvector associated to the lowest eigenvalue defines the path of minimum stochastic forcing in phase space, and a transform to a new coordinate system aligned with the eigenvectors can increase the predictability of the system.
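
    The core computation reduces to an eigendecomposition of a locally estimated diffusion matrix. A sketch, assuming the trajectory increments falling in one mesh cell have already been collected and drift-corrected:

        import numpy as np

        def stochastic_eigendirections(increments, dt):
            """increments: (n, d) array of drift-corrected trajectory increments
            collected in one local mesh cell of phase space."""
            D = increments.T @ increments / (2.0 * dt * len(increments))  # local diffusion matrix
            eigenvalues, eigenvectors = np.linalg.eigh(D)  # D is symmetric positive semi-definite
            # eigenvectors[:, 0], for the smallest eigenvalue, is the direction
            # of minimum stochastic forcing at this point of phase space.
            return eigenvalues, eigenvectors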

  13. Implementation of Chord Length Sampling for Transport Through a Binary Stochastic Mixture

    SciTech Connect

    T.J. Donovan; T.M. Sutton; Y. Danon

    2002-11-18

    Neutron transport through a special case stochastic mixture is examined, in which spheres of constant radius are uniformly mixed in a matrix material. A Monte Carlo algorithm previously proposed and examined in 2-D has been implemented in a test version of MCNP. The Limited Chord Length Sampling (LCLS) technique provides a means for modeling a binary stochastic mixture as a cell in MCNP. When inside a matrix cell, LCLS uses chord-length sampling to sample the distance to the next stochastic sphere. After a surface crossing into a stochastic sphere, transport is treated explicitly until the particle exits or is killed. Results were computed for a simple model with two different fixed neutron source distributions and three sets of material number densities. Stochastic spheres were modeled as black absorbers and varying degrees of scattering were introduced in the matrix material. Tallies were computed using the LCLS capability and by averaging results obtained from multiple realizations of the random geometry. Results were compared for accuracy and figures of merit were compared to indicate the efficiency gain of the LCLS method over the benchmark method. Results show that LCLS provides very good accuracy if the scattering optical thickness of the matrix is small (≤ 1). Comparisons of figures of merit show an advantage to LCLS varying between factors of 141 and 5. LCLS efficiency and accuracy relative to the benchmark both decrease as scattering is increased in the matrix.
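
    The sampling step at the heart of chord-length sampling can be illustrated with the textbook mean-chord approximation for a matrix containing spheres of radius R at packing fraction f; the exponential form below is the standard idealization, not necessarily the distribution implemented in MCNP:

        import numpy as np

        def distance_to_next_sphere(R, f, rng=np.random.default_rng()):
            """Sample the matrix flight distance to the next sphere surface."""
            mean_chord = 4.0 * R * (1.0 - f) / (3.0 * f)  # mean matrix chord length
            return rng.exponential(mean_chord)            # memoryless chord-length model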

  14. Estimating the granularity coefficient of a Potts-Markov random field within a Markov chain Monte Carlo algorithm.

    PubMed

    Pereyra, Marcelo; Dobigeon, Nicolas; Batatia, Hadj; Tourneret, Jean-Yves

    2013-06-01

    This paper addresses the problem of estimating the Potts parameter β jointly with the unknown parameters of a Bayesian model within a Markov chain Monte Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem because performing inference on β requires computing the intractable normalizing constant of the Potts model. In the proposed MCMC method, the estimation of β is conducted using a likelihood-free Metropolis-Hastings algorithm. Experimental results obtained for synthetic data show that estimating β jointly with the other unknown parameters leads to estimation results that are as good as those obtained with the actual value of β. On the other hand, choosing an incorrect value of β can degrade estimation performance significantly. To illustrate the interest of this method, the proposed algorithm is successfully applied to real bidimensional SAR and tridimensional ultrasound images.

  15. Cubic-scaling algorithm and self-consistent field for the random-phase approximation with second-order screened exchange

    SciTech Connect

    Moussa, Jonathan E.

    2014-01-07

    The random-phase approximation with second-order screened exchange (RPA+SOSEX) is a model of electron correlation energy with two caveats: its accuracy depends on an arbitrary choice of mean field, and it scales as O(n^5) operations and O(n^3) memory for n electrons. We derive a new algorithm that reduces its scaling to O(n^3) operations and O(n^2) memory using controlled approximations and a new self-consistent field that approximates Brueckner coupled-cluster doubles theory with RPA+SOSEX, referred to as Brueckner RPA theory. The algorithm comparably reduces the scaling of second-order Møller-Plesset perturbation theory with smaller cost prefactors than RPA+SOSEX. Within a semiempirical model, we study H_2 dissociation to test accuracy and H_n rings to verify scaling.

  16. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters, which is the theoretical rationale for the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, overcoming the shortcomings of any single algorithm and exploiting the advantages of each. The method is validated on the standard Fibonacci benchmark sequences and on real protein sequences. Experiments show that the proposed method outperforms the single algorithms on the accuracy of the computed protein sequence energy value, which proves it an effective way to predict the structure of proteins. PMID:25069136

  17. Stochastic Cooling

    SciTech Connect

    Blaskiewicz, M.

    2011-01-01

    Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.

  18. Fourier mode analysis of multigrid methods for partial differential equations with random coefficients

    SciTech Connect

    Seynaeve, Bert; Rosseel, Eveline; Nicolai, Bart; Vandewalle, Stefan

    2007-05-20

    Partial differential equations with random coefficients appear for example in reliability problems and uncertainty propagation models. Various approaches exist for computing the stochastic characteristics of the solution of such a differential equation. In this paper, we consider the spectral expansion approach. This method transforms the continuous model into a large discrete algebraic system. We study the convergence properties of iterative methods for solving this discretized system. We consider one-level and multi-level methods. The classical Fourier mode analysis technique is extended towards the stochastic case. This is done by taking into account the eigenstructure of a certain matrix that depends on the random structure of the problem. We show how the convergence properties depend on the particulars of the algorithm, on the discretization parameters and on the stochastic characteristics of the model. Numerical results are added to illustrate some of our theoretical findings.

  19. Stochastic uncertainty analysis for solute transport in randomly heterogeneous media using a Karhunen-Loève-based moment equation approach

    USGS Publications Warehouse

    Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao

    2007-01-01

    A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen-Loève-based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen-Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two-dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.

  20. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
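
    The skeleton of a creeping random search is short; in this sketch the transport model, state-variable constraints, and chromatic aberration are all abstracted into an objective function, and parameter constraints are enforced by simple clipping to a box:

        import numpy as np

        def creeping_random_search(objective, x0, lower, upper, step=0.1, iters=5000,
                                   seed=0):
            """Sequential random perturbation: keep a trial point only if it improves."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            fx = objective(x)
            for _ in range(iters):
                trial = np.clip(x + step * rng.standard_normal(x.size), lower, upper)
                f_trial = objective(trial)
                if f_trial < fx:                 # accept only improving perturbations
                    x, fx = trial, f_trial
            return x, fx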

  1. Stochastic Wireless Channel Modeling, Estimation and Identification from Measurements

    SciTech Connect

    Olama, Mohammed M; Djouadi, Seddik M; Li, Yanyan

    2008-07-01

    This paper is concerned with the stochastic modeling of wireless fading channels, parameter estimation, and system identification from measurement data. Wireless channels are represented in stochastic state-space form, whose parameters and state variables are estimated using the expectation-maximization algorithm and Kalman filtering, respectively. Both are carried out solely from received signal measurements. These algorithms estimate the channel in-phase and quadrature components and identify the channel parameters recursively. The proposed algorithm is tested using measurement data, and the results are presented.

  2. Optimisation of simulations of stochastic processes by removal of opposing reactions.

    PubMed

    Spill, Fabian; Maini, Philip K; Byrne, Helen M

    2016-02-28

    Models invoking the chemical master equation are used in many areas of science, and, hence, their simulation is of interest to many researchers. The complexity of the problems at hand often requires considerable computational power, so a large number of algorithms have been developed to speed up simulations. However, a drawback of many of these algorithms is that their implementation is more complicated than, for instance, the Gillespie algorithm, which is widely used to simulate the chemical master equation, and can be implemented with a few lines of code. Here, we present an algorithm which does not modify the way in which the master equation is solved, but instead modifies the transition rates. It works for all models in which reversible reactions occur by replacing such reversible reactions with effective net reactions. Examples of such systems include reaction-diffusion systems, in which diffusion is modelled by a random walk. The random movement of particles between neighbouring sites is then replaced with a net random flux. Furthermore, as we modify the transition rates of the model, rather than its implementation on a computer, our method can be combined with existing algorithms that were designed to speed up simulations of the stochastic master equation. By focusing on some specific models, we show how our algorithm can significantly speed up model simulations while maintaining essential features of the original model. PMID:26931679
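
    For a single pair of opposing transitions the transformation can be caricatured as collapsing the two rates into one effective net rate in the dominant direction; the effective propensities derived in the paper are more careful than this bare difference, which only shows where the modification lives (in the rates, not in the solver):

        def net_reaction(rate_forward, rate_backward):
            """Collapse a reversible reaction pair into one effective net reaction."""
            net = rate_forward - rate_backward
            if net >= 0.0:
                return "forward", net    # fire the forward reaction at the net rate
            return "backward", -net      # otherwise fire the backward one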

  3. Automated Flight Routing Using Stochastic Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, en route convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested en route sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
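
    A toy sketch of the underlying Bellman recursion, assuming a simple stage-wise choice between two sectors and a hypothetical weather-incursion penalty; the paper's actual cost terms (travel time, weather, congestion) and disturbance model are not reproduced:

      import numpy as np

      # p[stage][action] is the probability that the chosen sector is blocked;
      # crossing costs one time unit plus an expected penalty when blocked.
      # Probabilities and the penalty value are illustrative only.
      def optimal_route(p, penalty=10.0):
          J = 0.0
          route = []
          for stage in reversed(range(len(p))):
              costs = [1.0 + p[stage][a] * penalty + J for a in (0, 1)]
              a_star = int(np.argmin(costs))       # Bellman backup at this stage
              route.append(a_star)
              J = costs[a_star]
          return J, route[::-1]                    # expected cost, stage-wise choices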

  4. Connecting the dots: Semi-analytical and random walk numerical solutions of the diffusion–reaction equation with stochastic initial conditions

    SciTech Connect

    Paster, Amir; Bolster, Diogo; Benson, David A.

    2014-04-15

    We study a system with bimolecular irreversible kinetic reaction A+B→∅ where the underlying transport of reactants is governed by diffusion, and the local reaction term is given by the law of mass action. We consider the case where the initial concentrations are given in terms of an average and a white noise perturbation. Our goal is to solve the diffusion–reaction equation which governs the system, and we tackle it with both analytical and numerical approaches. To obtain an analytical solution, we develop the equations of moments and solve them approximately. To obtain a numerical solution, we develop a grid-less Monte Carlo particle tracking approach, where diffusion is modeled by a random walk of the particles, and reaction is modeled by annihilation of particles. The probability of annihilation is derived analytically from the particles' co-location probability. We rigorously derive the relationship between the initial number of particles in the system and the amplitude of white noise represented by that number. This enables us to compare the particle simulations and the approximate analytical solution and offer an explanation of the late time discrepancies.
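
    A heavily simplified 1-D sketch of one time step of such a scheme; the rate constant k, the mass per particle mp, and the exact form of the pairwise reaction probability are assumptions standing in for the paper's derivation:

      import numpy as np

      # One step of a toy 1-D A + B -> 0 particle-tracking scheme: each particle
      # random-walks with diffusion D, and an A-B pair at separation r reacts
      # with probability k*mp*v(r)*dt, where v(r) is the Gaussian co-location
      # density of two diffusing particles (relative diffusion 2D over dt).
      def step(xa, xb, D=1e-3, dt=0.1, k=1.0, mp=0.01, rng=np.random.default_rng()):
          xa = xa + rng.normal(0.0, np.sqrt(2 * D * dt), xa.size)   # random walk
          xb = xb + rng.normal(0.0, np.sqrt(2 * D * dt), xb.size)
          alive_a = np.ones(xa.size, bool)
          alive_b = np.ones(xb.size, bool)
          for i, x in enumerate(xa):
              r = np.abs(xb - x)
              v = np.exp(-r**2 / (8 * D * dt)) / np.sqrt(8 * np.pi * D * dt)
              p = k * mp * v * dt                  # pairwise reaction probability
              hits = np.where(alive_b & (rng.random(xb.size) < p))[0]
              if hits.size:                        # annihilate one A-B pair
                  alive_a[i] = False
                  alive_b[hits[0]] = False
          return xa[alive_a], xb[alive_b]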

  5. Connecting the dots: Semi-analytical and random walk numerical solutions of the diffusion-reaction equation with stochastic initial conditions

    NASA Astrophysics Data System (ADS)

    Paster, Amir; Bolster, Diogo; Benson, David A.

    2014-04-01

    We study a system with bimolecular irreversible kinetic reaction A+B→∅ where the underlying transport of reactants is governed by diffusion, and the local reaction term is given by the law of mass action. We consider the case where the initial concentrations are given in terms of an average and a white noise perturbation. Our goal is to solve the diffusion-reaction equation which governs the system, and we tackle it with both analytical and numerical approaches. To obtain an analytical solution, we develop the equations of moments and solve them approximately. To obtain a numerical solution, we develop a grid-less Monte Carlo particle tracking approach, where diffusion is modeled by a random walk of the particles, and reaction is modeled by annihilation of particles. The probability of annihilation is derived analytically from the particles' co-location probability. We rigorously derive the relationship between the initial number of particles in the system and the amplitude of white noise represented by that number. This enables us to compare the particle simulations and the approximate analytical solution and offer an explanation of the late time discrepancies.

  6. Stochastic blind motion deblurring.

    PubMed

    Xiao, Lei; Gregson, James; Heide, Felix; Heidrich, Wolfgang

    2015-10-01

    Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can, therefore, only be obtained with the help of prior information in the form of (often nonconvex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that historically each new prior requires the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluation of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors, produces results with Peak Signal-to-Noise Ratio (PSNR) values that match or exceed the results obtained by much more complex state-of-the-art blind motion deblurring algorithms. PMID:25974941

  7. Stochastic Aspects of Cardiac Arrhythmias

    NASA Astrophysics Data System (ADS)

    Lerma, Claudia; Krogh-Madsen, Trine; Guevara, Michael; Glass, Leon

    2007-07-01

    Abnormal cardiac rhythms (cardiac arrhythmias) often display complex changes over time that can have a random or haphazard appearance. Mathematically, these changes can on occasion be identified with bifurcations in difference or differential equation models of the arrhythmias. One source for the variability of these rhythms is the fluctuating environment. However, in the neighborhood of bifurcation points, the fluctuations induced by the stochastic opening and closing of individual ion channels in the cell membrane, which results in membrane noise, may lead to randomness in the observed dynamics. To illustrate this, we consider the effects of stochastic properties of ion channels on the resetting of pacemaker oscillations and on the generation of early afterdepolarizations. The comparison of the statistical properties of long records showing arrhythmias with the predictions from theoretical models should help in the identification of different mechanisms underlying cardiac arrhythmias.

  8. Joint inversion of marine seismic AVA and CSEM data using statistical rock-physics models and Markov random fields: Stochastic inversion of AVA and CSEM data

    SciTech Connect

    Chen, J.; Hoversten, G.M.

    2011-09-15

    Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann's equations and Archie's law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and the inaccuracy of the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is carried by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate the seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.

  9. Behavioral Stochastic Resonance within the Human Brain

    NASA Astrophysics Data System (ADS)

    Kitajo, Keiichi; Nozaki, Daichi; Ward, Lawrence M.; Yamamoto, Yoshiharu

    2003-05-01

    We provide the first evidence that stochastic resonance within the human brain can enhance behavioral responses to weak sensory inputs. We asked subjects to adjust handgrip force to a slowly changing, subthreshold gray level signal presented to their right eye. Behavioral responses were optimized by presenting randomly changing gray levels separately to the left eye. The results indicate that observed behavioral stochastic resonance was mediated by neural activity within the human brain where the information from both eyes converges.

  10. Multiple Stochastic Point Processes in Gene Expression

    NASA Astrophysics Data System (ADS)

    Murugan, Rajamanickam

    2008-04-01

    We generalize the idea of multiple stochasticity in chemical reaction systems to gene expression. Using the chemical Langevin equation approach, we investigate how this multiple stochasticity can influence the overall molecular number fluctuations. We show that the main sources of this multiple stochasticity in gene expression could be the randomness in transcription and translation initiation times, which in turn originates from the underlying bio-macromolecular recognition processes, such as site-specific DNA-protein interactions, and therefore can be internally regulated by supra-molecular structural factors such as the condensation/super-coiling of DNA. Our theory predicts that (1) in the case of a gene expression system, the variance (φ) introduced by the randomness in transcription and translation initiation times approximately scales with the degree of condensation (s) of DNA or mRNA as φ ∝ s⁻⁶. From the theoretical analysis of the Fano factor as well as the coefficient of variation associated with the protein number fluctuations, we predict that (2) unlike the singly stochastic case, where the Fano factor has been shown to be a monotonous function of the translation rate, in the case of multiple-stochastic gene expression the Fano factor is a turnover function with a definite minimum. This in turn suggests that multiple-stochastic processes can also be well tuned to behave like singly stochastic point processes by adjusting the rate parameters.

  11. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.

  12. Digital simulation and modeling of nonlinear stochastic systems

    SciTech Connect

    Richardson, J M; Rowland, J R

    1981-04-01

    Digitally generated solutions of nonlinear stochastic systems are not unique but depend critically on the numerical integration algorithm used. Some theoretical and practical implications of this dependence are examined. The Ito-Stratonovich controversy concerning the solution of nonlinear stochastic systems is shown to be more than a theoretical debate on maintaining Markov properties as opposed to utilizing the computational rules of ordinary calculus. The theoretical arguments give rise to practical considerations in the formation and solution of discrete models from continuous stochastic systems. Well-known numerical integration algorithms are shown not only to provide different solutions for the same stochastic system but also to correspond to different stochastic integral definitions. These correspondences are proved by considering first and second moments of solutions that result from different integration algorithms and then comparing the moments to those arising from various stochastic integral definitions. This algorithm-dependence of solutions is in sharp contrast to the deterministic and linear stochastic cases in which unique solutions are determined by any convergent numerical algorithm. Consequences of the relationship between stochastic system solutions and simulation procedures are presented for a nonlinear filtering example. Monte Carlo simulations and statistical tests are applied to the example to illustrate the determining role which computational procedures play in generating solutions.
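
    The algorithm dependence is easy to reproduce: for dX = σX dW, the Euler-Maruyama scheme converges to the Itô solution while the Heun (trapezoidal) scheme converges to the Stratonovich one, so their sample means differ. A minimal sketch with illustrative parameters:

      import numpy as np

      rng = np.random.default_rng(0)

      # Ito mean stays at 1; Stratonovich mean grows like exp(sigma^2 * T / 2).
      sigma, T, n, paths = 1.0, 1.0, 1000, 20000
      dt = T / n
      x_em = np.full(paths, 1.0)
      x_heun = np.full(paths, 1.0)
      for _ in range(n):
          dw = rng.normal(0.0, np.sqrt(dt), paths)
          x_em += sigma * x_em * dw                       # Euler-Maruyama (Ito)
          pred = x_heun + sigma * x_heun * dw             # Heun predictor
          x_heun += 0.5 * sigma * (x_heun + pred) * dw    # corrector (Stratonovich)
      print(x_em.mean(), x_heun.mean())   # ~1.0 vs ~exp(0.5) ≈ 1.65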

  13. PRBP: Prediction of RNA-Binding Proteins Using a Random Forest Algorithm Combined with an RNA-Binding Residue Predictor.

    PubMed

    Ma, Xin; Guo, Jing; Xiao, Ke; Sun, Xiao

    2015-01-01

    The prediction of RNA-binding proteins is an incredibly challenging problem in computational biology. Although great progress has been made using various machine learning approaches with numerous features, the problem is still far from being solved. In this study, we attempt to predict RNA-binding proteins directly from amino acid sequences. A novel approach, PRBP, predicts RNA-binding proteins using the information of predicted RNA-binding residues in conjunction with a random forest based method. For a given protein, we first predict its RNA-binding residues and then judge whether the protein binds RNA or not based on information from that prediction. If the protein cannot be identified by the information associated with its predicted RNA-binding residues, then a novel random forest predictor is used to determine if the query protein is an RNA-binding protein. We incorporated features of evolutionary information combined with physicochemical features (EIPP) and an amino acid composition feature to establish the random forest predictor. Feature analysis showed that EIPP contributed the most to the prediction of RNA-binding proteins. The results also showed that the information from the RNA-binding residue prediction improved the overall performance of our RNA-binding protein prediction. It is anticipated that the PRBP method will become a useful tool for identifying RNA-binding proteins. A PRBP Web server implementation is freely available at http://www.cbi.seu.edu.cn/PRBP/.
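
    A hedged sketch of the second-stage classifier only, using scikit-learn's random forest on placeholder feature vectors; the paper's actual EIPP features, data, and tuning are not reproduced:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # X: one row of hypothetical features per protein; y: 1 = RNA-binding.
      X = np.random.rand(200, 40)
      y = np.random.randint(0, 2, 200)
      clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
      print(clf.predict(X[:5]))             # predicted labels for five proteins
      print(clf.feature_importances_[:5])   # forest-derived feature importances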

  14. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
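
    Inserting break-type inhomogeneities of the kind described, with breakpoints arriving as a (discretized) Poisson process and normally distributed sizes, might look like the following sketch; the rate and size parameters are illustrative, not the benchmark's:

      import numpy as np

      rng = np.random.default_rng(1)

      # Shift all values after each break time by a normally distributed size;
      # geometric inter-arrival times discretize the Poisson break process.
      def insert_breaks(series, mean_months_between_breaks=180, size_std=0.8):
          out = series.copy()
          t = 0
          while True:
              t += rng.geometric(1.0 / mean_months_between_breaks)
              if t >= out.size:
                  return out
              out[t:] += rng.normal(0.0, size_std)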

  15. AESS: Accelerated Exact Stochastic Simulation

    NASA Astrophysics Data System (ADS)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.

    Program summary
    Program title: AESS
    Catalogue identifier: AEJW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: University of Tennessee copyright agreement
    No. of lines in distributed program, including test data, etc.: 10 861
    No. of bytes in distributed program, including test data, etc.: 394 631
    Distribution format: tar.gz
    Programming language: C for processors, CUDA for NVIDIA GPUs
    Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
    Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
    Classification: 3, 16.12
    Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
    Solution method:
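
    For reference, the textbook direct-method SSA that AESS accelerates can be sketched in a few lines (toy reversible isomerization with illustrative rates; this is not the optimized AESS code):

      import numpy as np

      rng = np.random.default_rng(2)

      # Gillespie direct method for A -> B (rate k1) and B -> A (rate k2).
      def ssa(nA=100, nB=0, k1=1.0, k2=0.5, t_end=5.0):
          t, traj = 0.0, [(0.0, nA, nB)]
          while t < t_end:
              a = np.array([k1 * nA, k2 * nB])     # reaction propensities
              a0 = a.sum()
              if a0 == 0.0:
                  break
              t += rng.exponential(1.0 / a0)       # time to the next reaction
              if rng.random() < a[0] / a0:         # choose which reaction fires
                  nA, nB = nA - 1, nB + 1
              else:
                  nA, nB = nA + 1, nB - 1
              traj.append((t, nA, nB))
          return traj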

  16. Solving stochastic epidemiological models using computer algebra

    NASA Astrophysics Data System (ADS)

    Hincapie, Doracelly; Ospina, Juan

    2011-06-01

    Mathematical modeling in epidemiology is an important tool for understanding the ways in which diseases are transmitted and controlled. The mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on small systems of non-linear ordinary differential equations, and stochastic models are based on very large systems of linear differential equations. Deterministic models admit complete, rigorous and automatic analysis of stability, both local and global, from which it is possible to derive algebraic expressions for the basic reproductive number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat, and the analysis of their properties requires complicated considerations in statistical mathematics. In this work we propose to use computer algebra software with the aim of solving epidemic stochastic models such as the SIR model and the carrier-borne model. Specifically, we use Maple to solve these stochastic models in the case of small groups, and we obtain results that do not appear in standard textbooks or in the updated books on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, and this shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic models. We also perform numerical simulations with our algebraic results, and we estimate basic parameters such as the basic reproductive rate and the stochastic threshold. We claim that our algorithms and results are important tools for controlling diseases in a globalized world.

  17. Peculiarities of stochastic regime of Arctic ice cover time evolution over 1987-2014 from microwave satellite sounding on the basis of NASA team 2 algorithm

    NASA Astrophysics Data System (ADS)

    Raev, M. D.; Sharkov, E. A.; Tikhonov, V. V.; Repina, I. A.; Komarova, N. Yu.

    2015-12-01

    The GLOBAL-RT database (DB) is composed of long-term multichannel radiothermal observation data received from the DMSP F08-F17 satellites; it is permanently supplemented with new data by the Earth Exploration from Space department of the Space Research Institute, Russian Academy of Sciences. Arctic ice-cover areas for regions above 60° N latitude were calculated using the polar version of the DB and the NASA Team 2 algorithm, which is widely used in the scientific literature. Based on the analysis of the variability of the Arctic ice cover during 1987-2014, two months were selected in which the Arctic ice cover was maximal (February) and minimal (September), and the average ice-cover area was calculated for these months. Confidence intervals of the average values lie within the 95-98% limits. Several approximations are derived for the time dependences of the ice-cover maximum and minimum over the period under study. Regression dependences were calculated for polynomials from the first degree (linear) to the sixth. The root-mean-square error of deviation from the approximating curve decreased sharply up to the biquadratic polynomial and then varied insignificantly: from 0.5593 for the polynomial of third degree to 0.4560 for the biquadratic polynomial. Hence, the commonly used strictly linear regression with a negative time gradient for the September Arctic ice-cover minimum over 30 years should be considered incorrect.

  18. Stochastic inverse problems: Models and metrics

    SciTech Connect

    Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.

    2015-03-31

    In past work, we introduced model-based inverse methods and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold-work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than the other. 3. L- or C-shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficients that we get from NLSE. We will also address the issue of surrogate functions, which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.

  19. Estimation of optical flow in airborne electro-optical sensors by stochastic approximation

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.

    1991-01-01

    The essence of motion or range estimation by passive electrooptical means is the ability to determine the correspondence of picture elements in pairs of image frames and to estimate their coordinates and their disparity (relative shifts) in the image plane of an electrooptical imaging sensor. The disparity can be in successive frames due to self-motion or in simultaneous frames of a stereo pair. A key issue is to provide these estimates on-line. This paper describes the theoretical background of such an interframe shift estimator. It is based on a stochastic gradient algorithm, specifically implementing a form of stochastic approximation, which can achieve rapid convergence of the shift estimate. Analytical and numerical simulation examples for random texture and isolated features validate the feasibility and the effectiveness of the estimator.
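
    A minimal sketch of such a finite-difference stochastic-approximation shift estimator, here in the Kiefer-Wolfowitz style with classical gain sequences; the objective (interpolated sum of squared differences between frames) and the gains are illustrative assumptions:

      import numpy as np

      # Estimate a 1-D interframe shift d minimizing the mean squared error
      # between frame2(x) and frame1(x - d), using a finite-difference gradient
      # with shrinking perturbation c_n and gain a_n.
      def estimate_shift(frame1, frame2, steps=200):
          x = np.arange(frame1.size, dtype=float)
          J = lambda d: np.mean((frame2 - np.interp(x - d, x, frame1)) ** 2)
          d = 0.0
          for n in range(1, steps + 1):
              a_n, c_n = 1.0 / n, 1.0 / n ** (1 / 3)     # classical KW gains
              g = (J(d + c_n) - J(d - c_n)) / (2 * c_n)  # finite-difference grad
              d -= a_n * g
          return d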

  20. Calculation of a double reactive azeotrope using stochastic optimization approaches

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2013-02-01

    A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The modeling of reactive azeotrope calculation is represented by a nonlinear algebraic system with phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system (of industrial interest) with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a bidimensional subdomain, the identification of reactive azeotropes. A strategy for the calculation of multiple roots of nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
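
    A generic Luus-Jaakola sketch, with a stand-in objective in place of the chemical/phase-equilibrium residuals (the thermodynamic model is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(3)

      # Sample uniformly in a box around the incumbent; contract the box each
      # outer iteration. f would be, e.g., a squared-residual norm of the
      # azeotropy equations.
      def luus_jaakola(f, x0, radius, outer=50, inner=20, contraction=0.95):
          x_best, f_best = np.asarray(x0, float), f(np.asarray(x0, float))
          r = np.asarray(radius, float)
          for _ in range(outer):
              for _ in range(inner):
                  cand = x_best + rng.uniform(-r, r)
                  fc = f(cand)
                  if fc < f_best:
                      x_best, f_best = cand, fc
              r *= contraction                     # shrink the search region
          return x_best, f_best

      # Toy usage: minimize a quadratic residual in two variables.
      sol, res = luus_jaakola(lambda z: np.sum((z - 0.3) ** 2), [0.0, 0.0], [1.0, 1.0])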

  1. Stochastic Impulse Control of Non-Markovian Processes

    SciTech Connect

    Djehiche, Boualem; Hamadene, Said; Hdhiri, Ibtissam

    2010-02-15

    We consider a class of stochastic impulse control problems of general stochastic processes, i.e. not necessarily Markovian. Under fairly general conditions we establish the existence of an optimal impulse control. We also prove the existence of combined optimal stochastic and impulse control for a fairly general class of diffusions with random coefficients. Unlike in the Markovian framework, we cannot apply quasi-variational inequality techniques. We instead derive the main results using techniques involving reflected BSDEs and the Snell envelope.

  2. Neutronic analysis of the stochastic distribution of fuel particles in Very High Temperature Gas-Cooled Reactors

    NASA Astrophysics Data System (ADS)

    Ji, Wei

    The Very High Temperature Gas-Cooled Reactor (VHTR) is a promising candidate for Generation IV designs due to its inherent safety, efficiency, and its proliferation-resistant and waste-minimizing fuel cycle. A number of these advantages stem from its unique fuel design, consisting of a stochastic mixture of tiny (0.78 mm diameter) microspheres with multiple coatings. However, the microsphere fuel regions represent point absorbers for resonance-energy neutrons, resulting in the "double heterogeneity" of particle fuel. Special care must be taken to analyze this fuel in order to predict the spatial and spectral dependence of the neutron population in a steady-state reactor configuration. The challenges are considerable and resist brute-force computation: there are over 10¹⁰ microspheres in a typical reactor configuration, with no hope of identifying individual microspheres in this stochastic mixture. Moreover, when individual microspheres "deplete" (e.g., burn the fissile isotope U-235 or transmute the fertile isotope U-238 (eventually) to Pu-239), the stochastic time-dependent nature of the depletion compounds the difficulty posed by the stochastic spatial mixture of the fuel, resulting in a prohibitive computational challenge. The goal of this research is to develop a methodology to analyze particle fuel randomly distributed in the reactor, accounting for the kernel absorptions as well as the stochastic depletion of the fuel mixture. This Ph.D. dissertation addresses these challenges by developing a methodology for analyzing particle fuel that is accurate enough to properly model stochastic particle fuel in both static and time-dependent configurations and yet efficient enough to be used for routine analyses. This effort includes the creation of a new physical model, the development of a simulation algorithm, and application to real reactor configurations.

  3. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.

  4. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

    This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We show that the result of one SKMF run may correspond to the average of several KMC runs. The number of KMC runs is inversely proportional to the square of the noise amplitude in SKMF. This makes SKMF an ideal tool also for statistical purposes.
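
    A schematic, strongly simplified update in the SKMF spirit, deterministic mean-field exchange plus a Langevin noise term, where the jump frequency and noise amplitude are illustrative placeholders rather than the published equations:

      import numpy as np

      rng = np.random.default_rng(4)

      # One step for a 1-D chain of mean-field occupancies c_i: nearest-neighbour
      # exchange plus Langevin noise whose amplitude sets how many KMC runs a
      # single SKMF run corresponds to.
      def skmf_step(c, dt=1e-3, gamma=1.0, amp=0.05):
          flux = gamma * (np.roll(c, -1) - 2 * c + np.roll(c, 1))   # mean-field part
          noise = amp * rng.normal(0.0, 1.0, c.size) * np.sqrt(dt)  # Langevin term
          return np.clip(c + dt * flux + noise, 0.0, 1.0)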

  5. Mechanical Autonomous Stochastic Heat Engine.

    PubMed

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir. PMID:27419553

  6. Mechanical Autonomous Stochastic Heat Engine

    NASA Astrophysics Data System (ADS)

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.

  7. Roulette-wheel selection via stochastic acceptance

    NASA Astrophysics Data System (ADS)

    Lipowski, Adam; Lipowska, Dorota

    2012-03-01

    Roulette-wheel selection is a frequently used method in genetic and evolutionary algorithms or in modeling of complex networks. Existing routines select one of N individuals using search algorithms of O(N) or O(logN) complexity. We present a simple roulette-wheel selection algorithm, which typically has O(1) complexity and is based on stochastic acceptance instead of searching. We also discuss a hybrid version, which might be suitable for highly heterogeneous weight distributions, found, for example, in some models of complex networks. With minor modifications, the algorithm might also be used for sampling with fitness cut-off at a certain value or for sampling without replacement.
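
    The algorithm is short enough to state directly; this sketch follows the paper's description (pick an individual uniformly, accept it with probability w_i/w_max, repeat until accepted):

      import random

      # O(1)-expected-time roulette-wheel selection via stochastic acceptance.
      def select(weights, w_max):
          n = len(weights)
          while True:
              i = random.randrange(n)                    # uniform candidate
              if random.random() < weights[i] / w_max:   # accept w.p. w_i / w_max
                  return i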

  8. Stochastic Analysis and Design of Heterogeneous Microstructural Materials System

    NASA Astrophysics Data System (ADS)

    Xu, Hongyi

    Advanced materials system refers to new materials that are comprised of multiple traditional constituents but complex microstructure morphologies, which lead to superior properties over conventional materials. To accelerate the development of new advanced materials systems, the objective of this dissertation is to develop a computational design framework and the associated techniques for design automation of microstructural materials systems, with an emphasis on addressing the uncertainties associated with the heterogeneity of microstructural materials. Five key research tasks are identified: design representation, design evaluation, design synthesis, material informatics and uncertainty quantification. Design representation of microstructure includes statistical characterization and stochastic reconstruction. This dissertation develops a new descriptor-based methodology, which characterizes 2D microstructures using descriptors of composition, dispersion and geometry. Statistics of 3D descriptors are predicted based on 2D information to enable 2D-to-3D reconstruction. An efficient sequential reconstruction algorithm is developed to reconstruct statistically equivalent random 3D digital microstructures. In design evaluation, a stochastic decomposition and reassembly strategy is developed to deal with the high computational costs and uncertainties induced by material heterogeneity. The properties of Representative Volume Elements (RVEs) are predicted by stochastically reassembling Statistical Volume Elements (SVEs) with stochastic properties into a coarse representation of the RVE. In design synthesis, a new descriptor-based design framework is developed, which integrates computational methods of microstructure characterization and reconstruction, sensitivity analysis, Design of Experiments (DOE), metamodeling and optimization to enable parametric optimization of the microstructure for achieving the desired material properties. Material informatics is studied to efficiently reduce the

  9. Stochastic cooling in RHIC

    SciTech Connect

    Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.

    2009-05-04

    After the success of longitudinal stochastic cooling of the bunched heavy-ion beam in RHIC, transverse stochastic cooling in the vertical plane of the Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.

  10. Estimation of the influence of uncertain parameters on the stochastic thermal regime of embankments in permafrost regions

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhao, Xiaodong; Chen, Xing

    2016-07-01

    For embankments in permafrost regions, the soil properties and the upper boundary conditions are stochastic because of complex geological processes and a changeable atmospheric environment. These stochastic parameters mean that the conventionally deterministic temperature field of an embankment becomes stochastic. In order to estimate the influence of stochastic parameters on the random temperature field of embankments in permafrost regions, a series of simulated tests is conducted in this study. We consider the soil properties as random fields and the upper boundary conditions as stochastic processes. Taking the variability of each stochastic parameter into account individually or concurrently, the corresponding random temperature fields are investigated by the Neumann stochastic finite element method. The results show that the standard deviations both under the embankment and at the boundary increase with time when the stochastic effects of soil properties and boundary conditions are considered. Stochastic boundary conditions and soil properties play different roles in the random temperature field of the embankment at different times, and each stochastic parameter has a different effect on the random temperature field. These results can improve our understanding of the influence of stochastic parameters on random temperature fields for embankments in permafrost regions.

  11. Numerical Stochastic Homogenization Method and Multiscale Stochastic Finite Element Method - A Paradigm for Multiscale Computation of Stochastic PDEs

    SciTech Connect

    X. Frank Xu

    2010-03-30

    Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is becoming an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, and geo-systems. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work has been done on the integration of multiscale and stochastic methods, and there was no method formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM). The theory of MSFEM decomposes a boundary value problem of random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one. The slow-scale problem corresponds to common engineering modeling practices where fine-scale microstructure is approximated by certain effective constitutive constants, and it can be solved using standard numerical solvers. The fast-scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly met in conventional approaches by proposing a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way to deliver the first computational tool/software on uncertainty quantification of multiscale systems. The applications of MSFEM to engineering problems will directly enhance our modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquake), and biological systems (biological tissues, bones, protein folding). Continuous development of MSFEM will

  12. Preoperative overnight parenteral nutrition (TPN) improves skeletal muscle protein metabolism indicated by microarray algorithm analyses in a randomized trial.

    PubMed

    Iresjö, Britt-Marie; Engström, Cecilia; Lundholm, Kent

    2016-06-01

    Loss of muscle mass is associated with increased risk of morbidity and mortality in hospitalized patients. Uncertainties about the efficiency of short-term artificial nutrition remain, specifically regarding improvement of protein balance in skeletal muscles. In this study, algorithmic microarray analysis was applied to map cellular changes related to muscle protein metabolism in human skeletal muscle tissue during provision of overnight preoperative total parenteral nutrition (TPN). Twenty-two patients (11/group) scheduled for upper GI surgery due to malignant or benign disease received a continuous peripheral all-in-one TPN infusion (30 kcal/kg/day, 0.16 gN/kg/day) or saline infusion for 12 h prior to operation. Biopsies from the rectus abdominis muscle were taken at the start of operation for isolation of muscle RNA. RNA expression microarray analyses were performed with Agilent Sureprint G3, 8 × 60K arrays using one-color labeling. 447 mRNAs were differentially expressed between study and control patients (P < 0.1). mRNAs related to ribosomal biogenesis, mRNA processing, and translation were upregulated during overnight nutrition, particularly the anabolic signaling factor S6K1 (P < 0.01-0.1). Transcripts of genes associated with lysosomal degradation showed consistently lower expression during TPN, while mRNAs for ubiquitin-mediated degradation of proteins as well as transcripts related to intracellular signaling pathways (PI3 kinase/MAP kinase) were either increased or decreased. In conclusion, overnight standard TPN infusion at a constant rate altered mRNAs associated with mTOR signaling, increased initiation of protein translation, and suppressed autophagy/lysosomal degradation of proteins. This indicates that overnight preoperative parenteral nutrition is effective in promoting muscle protein metabolism. PMID:27273879

  13. A termination criterion for parameter estimation in stochastic models in systems biology.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2015-11-01

    Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model. PMID:26360409
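
    A sketch of the criterion's core comparison under stated assumptions (a noisy objective, a population of parameter sets, and an illustrative threshold factor and repeat count):

      import numpy as np

      # Stop when the spread of the objective across the population is no longer
      # distinguishable from the noise of re-evaluating the stochastic objective
      # at the best member.
      def should_stop(objective, population, n_repeats=20, factor=1.0):
          values = np.array([objective(p) for p in population])
          best = population[int(np.argmin(values))]
          repeats = np.array([objective(best) for _ in range(n_repeats)])
          return values.var() <= factor * repeats.var()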

  14. Stochastic many-body perturbation theory for anharmonic molecular vibrations

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew R.; Hirata, So

    2014-08-01

    A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the number of vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids the cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without the fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm⁻¹ and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.
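
    The core identity, an integral estimated as the mean of integrand/weight over Metropolis samples drawn from a normalized weight density, can be sketched with toy functions standing in for the PES-dependent integrands:

      import numpy as np

      rng = np.random.default_rng(5)

      # Estimate I = \int f(x) dx as the sample mean of f(x)/w(x), where w is a
      # normalized weight density sampled by a Metropolis random walk.
      f = lambda x: np.exp(-x**2) * (1 + x**2)              # toy integrand
      w = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # normalized weight

      x, samples = 0.0, []
      for _ in range(100000):
          prop = x + rng.normal(0.0, 1.0)                   # random-walk proposal
          if rng.random() < w(prop) / w(x):                 # Metropolis acceptance
              x = prop
          samples.append(f(x) / w(x))
      print(np.mean(samples))    # ~ (3/2)*sqrt(pi) ≈ 2.659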

  15. Stochastic many-body perturbation theory for anharmonic molecular vibrations

    SciTech Connect

    Hermes, Matthew R.; Hirata, So

    2014-08-28

    A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the number of vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids the cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without the fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm⁻¹ and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.

  19. Discrete analysis of stochastic NMR. II

    NASA Astrophysics Data System (ADS)

    Wong, S. T. S.; Roos, M. S.; Newmark, R. D.; Budinger, T. F.

    Stochastic NMR is an efficient technique for high-field in vivo imaging and spectroscopic studies where the peak RF power required may be prohibitively high for conventional pulsed NMR techniques. A stochastic NMR experiment excites the spin system with a sequence of RF pulses where the flip angles or the phases of the pulses are samples of a discrete stochastic process. In a previous paper the stochastic experiment was analyzed and analytic expressions for the input-output cross-correlations, average signal power, and signal spectral density were obtained for a general stochastic RF excitation. In this paper specific cases of excitation with random phase, fixed flip angle, and excitation with two random components in quadrature are analyzed. The input-output cross-correlation for these two types of excitations is shown to be Lorentzian. Line broadening is the only spectral distortion as the RF excitation power is increased. The systematic noise power is inversely proportional to the number of data points N used in the spectral reconstruction. The use of a complete maximum length sequence (MLS) may improve the signal-to-systematic-noise ratio by 20 dB relative to random binary excitation, but peculiar features in the higher-order autocorrelations of MLS cause noise-like distortion in the reconstructed spectra when the excitation power is high. The amount of noise-like distortion depends on the choice of the MLS generator.
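
    As a small illustration of the excitation sequences compared above, the Python sketch below generates a 127-point maximum length sequence with a linear feedback shift register and contrasts its circular autocorrelation with that of a random binary sequence. The tap choice realizes a primitive degree-7 feedback polynomial; the code is illustrative, not taken from the paper.

        import numpy as np

        def mls(nbits=7):
            # Fibonacci LFSR; these taps realize a primitive degree-7 polynomial,
            # so the period is maximal: 2**7 - 1 = 127
            state = [1] * nbits
            out = []
            for _ in range(2**nbits - 1):
                fb = state[6] ^ state[5]
                out.append(state[-1])
                state = [fb] + state[:-1]
            return np.array(out)

        seq = 2.0 * mls() - 1.0                      # map {0,1} -> {-1,+1}
        rng = np.random.default_rng(1)
        rnd = rng.choice([-1.0, 1.0], size=seq.size)

        def circ_autocorr(x, lag):
            return float(np.mean(x * np.roll(x, lag)))

        # off-peak circular autocorrelation of an MLS is exactly -1/127, while a
        # random binary sequence scatters around 0 with ~1/sqrt(127) spread
        print([round(circ_autocorr(seq, k), 3) for k in range(4)])
        print([round(circ_autocorr(rnd, k), 3) for k in range(4)])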

  20. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need for a method that is robust, precise and versatile enough to be deployed to monitor the seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique that is based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets that describe each of the target classes in terms of signal attributes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The Random Forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, Neural Networks) and requires no fine-tuning. Furthermore it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied
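
    A minimal sketch of the classification setup this abstract describes, using scikit-learn's random forest on synthetic feature vectors; the three source classes and four attributes are hypothetical stand-ins for the spectral and waveform features mentioned above.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 600
        # hypothetical source classes: 0 earthquake, 1 rockfall, 2 noise
        y = rng.integers(0, 3, size=n)
        # four made-up signal attributes whose means shift with the class
        X = rng.normal(size=(n, 4)) + y[:, None] * np.array([0.8, -0.5, 0.3, 0.0])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))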

  1. Stochastic scanning multiphoton multifocal microscopy.

    PubMed

    Jureller, Justin E; Kim, Hee Y; Scherer, Norbert F

    2006-04-17

    Multiparticle tracking with scanning confocal and multiphoton fluorescence imaging is increasingly important for elucidating biological function, as in the transport of intracellular cargo-carrying vesicles. We demonstrate a simple rapid-sampling stochastic scanning multifocal multiphoton microscopy (SS-MMM) fluorescence imaging technique that enables multiparticle tracking without specialized hardware at rates 1,000 times greater than conventional single-point raster scanning. Stochastic scanning of a diffractive-optic-generated 10×10 hexagonal array of foci with a white-noise-driven galvanometer yields a scan pattern that is random yet space-filling. SS-MMM creates a more uniformly sampled image with fewer spatio-temporal artifacts than obtained by conventional or multibeam raster scanning. SS-MMM is verified by simulation and experimentally demonstrated by tracking microsphere diffusion in solution. PMID:19516485

  2. Least expected time paths in stochastic, time-varying transportation networks

    SciTech Connect

    Miller-Hooks, E.D.; Mahmassani, H.S.

    1999-06-01

    The authors consider stochastic, time-varying transportation networks, where the arc weights (arc travel times) are random variables with probability distribution functions that vary with time. Efficient procedures are widely available for determining least time paths in deterministic networks. In stochastic but time-invariant networks, least expected time paths can be determined by setting each random arc weight to its expected value and solving an equivalent deterministic problem. This paper addresses the problem of determining least expected time paths in stochastic, time-varying networks. Two procedures are presented. The first procedure determines the a priori least expected time paths from all origins to a single destination for each departure time in the peak period. The second procedure determines lower bounds on the expected times of these a priori least expected time paths. This procedure determines an exact solution for the problem where the driver is permitted to react to revealed travel times on traveled links en route, i.e. in a time-adaptive route choice framework. Modifications to each of these procedures for determining least expected cost (where cost is not necessarily travel time) paths and lower bounds on the expected costs of these paths are given. Extensive numerical tests are conducted to illustrate the algorithms' computational performance as well as the properties of the solution.
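
    The time-invariant special case noted above reduces to an ordinary shortest-path computation once each random arc weight is replaced by its mean. A small sketch with a made-up network:

        import heapq

        # arcs: tail -> list of (head, expected travel time)
        graph = {
            "A": [("B", 4.0), ("C", 2.5)],
            "B": [("D", 3.0)],
            "C": [("B", 1.0), ("D", 6.0)],
            "D": [],
        }

        def dijkstra(src):
            dist = {v: float("inf") for v in graph}
            dist[src] = 0.0
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist[u]:
                    continue                      # stale heap entry
                for v, w in graph[u]:
                    if d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(heap, (dist[v], v))
            return dist

        print(dijkstra("A"))                      # least expected times from A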

  3. Quantumness, Randomness and Computability

    NASA Astrophysics Data System (ADS)

    Solis, Aldo; Hirsch, Jorge G.

    2015-06-01

    Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics.

  4. Resolution analysis by random probing

    NASA Astrophysics Data System (ADS)

    Simutė, S.; Fichtner, A.; van Leeuwen, T.

    2015-12-01

    We develop and apply methods for resolution analysis in tomography, based on stochastic probing of the Hessian or resolution operators. Key properties of our methods are (i) low algorithmic complexity and easy implementation, (ii) applicability to any tomographic technique, including full-waveform inversion and linearized ray tomography, (iii) applicability in any spatial dimension and to inversions with a large number of model parameters, (iv) low computational costs that are mostly a fraction of those required for synthetic recovery tests, and (v) the ability to quantify both spatial resolution and inter-parameter trade-offs. Using synthetic full-waveform inversions as benchmarks, we demonstrate that auto-correlations of random-model applications to the Hessian yield various resolution measures, including direction- and position-dependent resolution lengths, and the strength of inter-parameter mappings. We observe that the required number of random test models is around 5 in one, two and three dimensions. This means that the proposed resolution analyses are not only more meaningful than recovery tests but also computationally less expensive. We demonstrate the applicability of our method in 3D real-data full-waveform inversions for the western Mediterranean and Japan. In addition to tomographic problems, resolution analysis by random probing may be used in other inverse methods that constrain continuously distributed properties, including electromagnetic and potential-field inversions, as well as recently emerging geodynamic data assimilation.
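
    A hedged sketch of the probing idea: apply the operator to a few random test models and auto-correlate input with output. For a plain symmetric matrix this is the Rademacher diagonal estimator; the matrix below is a synthetic stand-in, not a waveform-inversion Hessian.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        B = rng.normal(size=(n, n))
        H = B @ B.T                        # symmetric stand-in "Hessian"

        k = 5                              # the abstract reports ~5 random models suffice
        V = rng.choice([-1.0, 1.0], size=(n, k))
        diag_est = np.mean(V * (H @ V), axis=1)   # E[v * (Hv)] = diag(H)

        print(np.round(np.diag(H), 2))     # true position-dependent values
        print(np.round(diag_est, 2))       # rough stochastic estimate from k probes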

  5. Numerical studies of the stochastic Korteweg-de Vries equation

    SciTech Connect

    Lin, Guang; Grinberg, Leopold; Karniadakis, George Em. E-mail: gk@dam.brown.edu

    2006-04-10

    We present numerical solutions of the stochastic Korteweg-de Vries equation for three cases corresponding to additive time-dependent noise, multiplicative space-dependent noise and a combination of the two. We employ polynomial chaos for discretization in random space, and discontinuous Galerkin and finite difference for discretization in physical space. The accuracy of the stochastic solutions is investigated by comparing the first two moments against analytical and Monte Carlo simulation results. Of particular interest is the interplay of spatial discretization error with the stochastic approximation error, which is examined for different orders of spatial and stochastic approximation.

  6. Analysis of stochastically forced quasi-periodic attractors

    SciTech Connect

    Ryashko, Lev

    2015-11-30

    A problem of the analysis of stochastically forced quasi-periodic auto-oscillations of nonlinear dynamic systems is considered. A stationary distribution of random trajectories in the neighborhood of the corresponding deterministic attractor (torus) is studied. A parametric description of the quadratic approximation of the quasipotential based on the stochastic sensitivity functions (SSF) technique is given. Using this technique, we analyse the dispersion of stochastic flows near the torus. For the case of a two-torus in three-dimensional space, the stochastic sensitivity function is constructed.

  7. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the

  8. Stochastic solution to quantum dynamics

    NASA Technical Reports Server (NTRS)

    John, Sarah; Wilson, John W.

    1994-01-01

    The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.

  9. Constrained Stochastic Extended Redundancy Analysis.

    PubMed

    DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco

    2015-06-01

    We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA). PMID:24327066

  10. Constrained Stochastic Extended Redundancy Analysis.

    PubMed

    DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco

    2015-06-01

    We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA).

  11. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect

    Infanger, G. (Dept. of Operations Research; Technische Univ., Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
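
    The sampling idea can be illustrated with a toy two-stage problem solved by sample average approximation; this is a sketch of the general principle, not Infanger's decomposition scheme, and the newsvendor numbers are made up.

        import numpy as np

        rng = np.random.default_rng(0)
        price, cost = 5.0, 3.0
        demand = rng.lognormal(mean=3.0, sigma=0.5, size=20_000)   # scenario samples

        def expected_profit(q, d):
            # first stage: order q; recourse: sell min(q, realized demand)
            return price * np.minimum(q, d).mean() - cost * q

        grid = np.linspace(0.0, 60.0, 241)
        best = grid[int(np.argmax([expected_profit(q, demand) for q in grid]))]
        print("SAA-optimal order quantity:", best)
        # closed-form check: optimum sits at the (price-cost)/price demand quantile
        print("quantile check            :", np.quantile(demand, (price - cost) / price))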

  12. Stochastic Differential Games with Asymmetric Information

    SciTech Connect

    Cardaliaguet, Pierre; Rainer, Catherine

    2009-02-15

    We investigate a two-player zero-sum stochastic differential game in which the players have an asymmetric information on the random payoff. We prove that the game has a value and characterize this value in terms of dual viscosity solutions of some second order Hamilton-Jacobi equation.

  13. Evaluating total inorganic nitrogen in coastal waters through fusion of multi-temporal RADARSAT-2 and optical imagery using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Meiling; Liu, Xiangnan; Li, Jin; Ding, Chao; Jiang, Jiale

    2014-12-01

    Satellites routinely provide frequent, large-scale, near-surface views of many oceanographic variables pertinent to plankton ecology. However, the nutrient fertility of water can be challenging to detect accurately using remote sensing technology. This research has explored an approach to estimate the nutrient fertility in coastal waters through the fusion of synthetic aperture radar (SAR) images and optical images using the random forest (RF) algorithm. The estimation of total inorganic nitrogen (TIN) in the Hong Kong Sea, China, was used as a case study. In March of 2009 and May and August of 2010, a sequence of multi-temporal in situ data, CCD images from China's HJ-1 satellite, and RADARSAT-2 images were acquired. Four sensitive parameters were selected as input variables to evaluate TIN: single-band reflectance, a normalized difference spectral index (NDSI) and HV and VH polarizations. The RF algorithm was used to merge the different input variables from the SAR and optical imagery to generate a new dataset (i.e., the TIN outputs). The results showed the temporal-spatial distribution of TIN. The TIN values decreased from coastal waters to the open water areas, and TIN values in the northeast area were higher than those found in the southwest region of the study area. The maximum TIN values occurred in May. Additionally, the accuracy of the TIN estimates was significantly improved when the SAR and optical data were used in combination rather than a single data type alone. This study suggests that this method of estimating nutrient fertility in coastal waters by effectively fusing data from multiple sensors is very promising.

  14. Mapping the distributions of C3 and C4 grasses in the mixed-grass prairies of southwest Oklahoma using the Random Forest classification algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Dong; de Beurs, Kirsten M.

    2016-05-01

    The objective of this paper is to demonstrate a new method to map the distributions of C3 and C4 grasses at 30 m resolution and over a 25-year period (1988-2013) by combining the Random Forest (RF) classification algorithm and patch stable areas identified using the spatial pattern analysis software FRAGSTATS. Predictor variables for RF classifications consisted of ten spectral variables, four soil edaphic variables and three topographic variables. We provided a confidence score for obtaining pure land cover at each pixel location by retrieving the classification tree votes. Classification accuracy assessments and predictor variable importance evaluations were conducted based on a repeated stratified sampling approach. Results show that patch stable areas obtained from larger patches are more appropriate sample data pools for training and validating RF classifiers for historical land cover mapping, and that it is more reasonable to use patch stable areas as sample pools to map land cover in a year closer to the present than in years further back in time. The percentage of high-confidence prediction pixels across the study area ranges from 71.18% in 1988 to 73.48% in 2013. The repeated stratified sampling approach is necessary to reduce the positive bias in the estimated classification accuracy caused by the possible selection of training and validation pixels from the same patch stable areas. The RF classification algorithm was able to identify the important environmental factors affecting the distributions of C3 and C4 grasses in our study area, such as elevation, soil pH, soil organic matter and soil texture.
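
    The vote-retrieval step described above can be sketched with scikit-learn: each tree of a fitted forest is queried individually, and the fraction of trees agreeing with the ensemble prediction serves as a per-pixel confidence score. The data are synthetic; note that the sub-estimators return encoded class indices, which coincide with the 0/1 labels used here.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, size=400)                     # two cover types
        X = rng.normal(size=(400, 3)) + y[:, None] * 1.5     # synthetic predictors

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        votes = np.stack([t.predict(X[:5]) for t in clf.estimators_])   # per-tree votes
        confidence = np.mean(votes == clf.predict(X[:5]), axis=0)
        print(np.round(confidence, 2))                       # vote-based confidence score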

  15. Method to describe stochastic dynamics using an optimal coordinate.

    PubMed

    Krivov, Sergei V

    2013-12-01

    A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way: every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function. PMID:24483410

  16. Characterizing stand-level forest canopy cover and height using Landsat time series, samples of airborne LiDAR, and the Random Forest algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.

    2015-03-01

    Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data, is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e. mature, young and mature and young (combined)) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest strata in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.

  17. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between the normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
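
    A quick numerical check of the ensemble-averaging property described above: under Bernoulli dropout of the inputs, the exact ensemble average of a logistic unit is closely tracked by the logistic of the mean pre-activation. Weights, inputs, and the keep probability below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        sigm = lambda z: 1.0 / (1.0 + np.exp(-z))

        w = rng.normal(size=20)
        x = rng.normal(size=20)
        p = 0.5                                     # Bernoulli keep probability

        masks = rng.random((100_000, 20)) < p       # sampled dropout masks
        mc = sigm((masks * x) @ w).mean()           # true ensemble average
        approx = sigm(p * (x @ w))                  # logistic of the mean

        print(mc, approx)                           # the two agree closely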

  18. Non-random structures in universal compression and the Fermi paradox

    NASA Astrophysics Data System (ADS)

    Gurzadyan, A. V.; Allahverdyan, A. E.

    2016-02-01

    We study the hypothesis of information panspermia, advanced recently among possible solutions of the Fermi paradox ("where are the aliens?"). It suggests that the expenses of alien signaling can be significantly reduced if their messages contain compressed information. To this end we consider universal compression and decoding mechanisms (e.g. the Lempel-Ziv-Welch algorithm) that can reveal non-random structures in compressed bit strings. The efficiency of the Kolmogorov stochasticity parameter for the detection of non-randomness is illustrated, along with Zipf's law. The universality of these methods, i.e. their independence from data details, can be of principal importance in searching for intelligent messages.
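
    The detection principle is easy to demonstrate: a structured message compresses far below an incompressible random string. The sketch uses zlib's LZ77 coder as a convenient stand-in for Lempel-Ziv-Welch.

        import random
        import zlib

        random.seed(0)
        n = 10_000
        random_bytes = bytes(random.getrandbits(8) for _ in range(n))
        structured = b"CQ" * (n // 2)               # a highly regular "message"

        for name, s in (("random", random_bytes), ("structured", structured)):
            ratio = len(zlib.compress(s, 9)) / len(s)
            print(name, "compression ratio:", round(ratio, 3))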

  19. Delayed stochastic control

    NASA Astrophysics Data System (ADS)

    Hosaka, Tadaaki; Ohira, Toru; Lucian, Christian; Milton, John

    2005-03-01

    Time-delayed feedback control becomes problematic in situations in which the time constant of the system is fast compared to the feedback reaction time. In particular, when perturbations are unpredictable, traditional feedback or feed-forward control schemes can be insufficient. Nonetheless, a human can balance a stick at their fingertip in the presence of fluctuations that occur on time scales shorter than their neural reaction times. Here we study a simple model of a repulsive delayed random walk and demonstrate that the interplay between noise and delay can transiently stabilize an unstable fixed point. This observation leads to the concept of ``delayed stochastic control,'' i.e. stabilization of tasks, such as stick balancing at the fingertip, by optimally tuning the noise level with respect to the feedback delay time. References: (1) J.L. Cabrera and J.G. Milton, PRL 89 158702 (2002); (2) T. Ohira and J.G. Milton, PRE 52 3277 (1995); (3) T. Hosaka, T. Ohira, C. Lucian, J.L. Cabrera, and J.G. Milton, Prog. Theor. Phys. (to appear).
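
    A minimal simulation of a repulsive delayed random walk in the spirit of the model above: each step is biased away from the origin according to the position tau steps earlier, and the time to escape a corridor around the unstable point can be compared across delays. Parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def first_escape(tau, bound=20, n_steps=20_000, bias=0.6):
            # step away from 0 with probability `bias`, judged on x(t - tau)
            x = np.zeros(n_steps)
            for t in range(tau, n_steps - 1):
                p_up = bias if x[t - tau] > 0 else 1.0 - bias
                x[t + 1] = x[t] + (1.0 if rng.random() < p_up else -1.0)
                if abs(x[t + 1]) > bound:
                    return t + 1
            return n_steps

        for tau in (1, 5, 20, 50):
            times = [first_escape(tau) for _ in range(50)]
            print(f"tau={tau:3d}  mean first-escape time: {np.mean(times):8.1f}")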

  20. Fluctuations as stochastic deformation

    NASA Astrophysics Data System (ADS)

    Kazinski, P. O.

    2008-04-01

    A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium.

  1. Fluctuations as stochastic deformation.

    PubMed

    Kazinski, P O

    2008-04-01

    A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium.

  2. Stochastic solution of population balance equations for reactor networks

    SciTech Connect

    Menz, William J.; Akroyd, Jethro; Kraft, Markus

    2014-01-01

    This work presents a sequential modular approach to solve a generic network of reactors with a population balance model using a stochastic numerical method. Full coupling to the gas-phase is achieved through operator-splitting. The convergence of the stochastic particle algorithm in test networks is evaluated as a function of network size, recycle fraction and numerical parameters. These test cases are used to identify methods through which systematic and statistical error may be reduced, including by use of stochastic weighted algorithms. The optimal algorithm was subsequently used to solve a one-dimensional example of silicon nanoparticle synthesis using a multivariate particle model. This example demonstrated the power of stochastic methods in resolving particle structure by investigating the transient and spatial evolution of primary polydispersity, degree of sintering and TEM-style images. Highlights: (i) an algorithm is presented to solve reactor networks with a population balance model; (ii) a stochastic method is used to solve the population balance equations; (iii) the convergence and efficiency of the reported algorithms are evaluated; (iv) the algorithm is applied to simulate silicon nanoparticle synthesis in a 1D reactor; (v) particle structure is reported as a function of reactor length and time.

  3. Variance-based sensitivity indices for stochastic models with correlated inputs

    SciTech Connect

    Kala, Zdeněk

    2015-03-10

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
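
    For orientation, the sketch below computes first-order Sobol indices by the standard pick-freeze Monte Carlo scheme in the independent-input baseline case; the correlated-input machinery that is the article's actual subject is not attempted here. The additive test model has known indices, which makes the estimates easy to check.

        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

        n = 200_000
        A = rng.normal(size=(n, 3))
        B = rng.normal(size=(n, 3))
        yA, yB = f(A), f(B)

        for i in range(3):
            AB = B.copy()
            AB[:, i] = A[:, i]                     # freeze coordinate i from A
            Si = np.mean(yA * (f(AB) - yB)) / np.var(yA)
            print(f"S{i+1} ~ {Si:.3f}")            # exact: 0.190, 0.762, 0.048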

  4. The Influence of Random Element Displacement on DOA Estimates Obtained with (Khatri–Rao-)Root-MUSIC

    PubMed Central

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-01-01

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case. PMID:25393783
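
    The collocation idea can be sketched in one dimension: statistics of a nonlinear response to a Gaussian position error follow from a handful of Gauss-Hermite nodes instead of many Monte Carlo draws, which is the source of the reported speedups. The response function below is a made-up stand-in for a DOA estimator.

        import numpy as np

        g = lambda e: np.cos(2.0 * e) + 0.3 * e**2     # hypothetical response
        sigma = 0.1                                    # std of the position error

        # probabilists' Gauss-Hermite rule (weight exp(-x^2/2)); normalize weights
        nodes, weights = np.polynomial.hermite_e.hermegauss(7)
        weights = weights / np.sqrt(2.0 * np.pi)

        vals = g(sigma * nodes)
        mean_sc = np.sum(weights * vals)
        var_sc = np.sum(weights * vals**2) - mean_sc**2

        rng = np.random.default_rng(0)
        mc = g(sigma * rng.normal(size=1_000_000))
        print(mean_sc, mc.mean())                      # 7 evaluations vs 10^6 draws
        print(var_sc, mc.var())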

  5. Random walk particle tracking simulations of non-Fickian transport in heterogeneous media

    SciTech Connect

    Srinivasan, G.; Tartakovsky, D.M.; Dentz, M.; Viswanathan, H.; Berkowitz, B.; Robinson, B.A.

    2010-06-01

    Derivations of continuum nonlocal models of non-Fickian (anomalous) transport require assumptions that might limit their applicability. We present a particle-based algorithm, which obviates the need for many of these assumptions by allowing stochastic processes that represent spatial and temporal random increments to be correlated in space and time, be stationary or non-stationary, and to have arbitrary distributions. The approach treats a particle trajectory as a subordinated stochastic process that is described by a set of Langevin equations, which represent a continuous time random walk (CTRW). Convolution-based particle tracking (CBPT) is used to increase the computational efficiency and accuracy of these particle-based simulations. The combined CTRW-CBPT approach enables one to convert any particle tracking legacy code into a simulator capable of handling non-Fickian transport.
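
    A minimal CTRW sketch in the spirit of the Langevin description above: jumps with heavy-tailed (Pareto) waiting times yield sub-linear growth of the mean squared displacement, the signature of non-Fickian transport. Exponents and sample sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n_part, n_jumps, alpha = 1000, 2000, 0.7    # alpha < 1: anomalous regime

        waits = rng.pareto(alpha, size=(n_part, n_jumps)) + 1.0
        jumps = rng.normal(size=(n_part, n_jumps))
        t = np.cumsum(waits, axis=1)                # jump epochs per particle
        x = np.cumsum(jumps, axis=1)                # position after each jump

        for T in (1e2, 1e3, 1e4):
            idx = (t <= T).sum(axis=1) - 1          # last jump before time T
            pos = np.where(idx >= 0, x[np.arange(n_part), np.clip(idx, 0, None)], 0.0)
            print(f"T={T:7.0f}   MSD = {np.mean(pos**2):9.1f}")   # grows ~ T**alpha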

  6. Adaptive path planning: Algorithm and analysis

    SciTech Connect

    Chen, Pang C.

    1995-03-01

    To address the need for a fast path planner, we present a learning algorithm that improves path planning by using past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a framework in which a slow but effective planner may be improved both cost-wise and capability-wise by a faster but less effective planner coupled with experience. We analyze the algorithm by formalizing the concept of improvability and deriving conditions under which a planner can be improved within the framework. The analysis is based on two stochastic models, one pessimistic (on task complexity), the other randomized (on experience utility). Using these models, we derive quantitative bounds to predict the learning behavior. We use these estimation tools to characterize the situations in which the algorithm is useful and to provide bounds on the training time. In particular, we show how to predict the maximum achievable speedup. Additionally, our analysis techniques are elementary and should be useful for studying other types of probabilistic learning as well.

  7. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts

  8. A Stochastic Employment Problem

    ERIC Educational Resources Information Center

    Wu, Teng

    2013-01-01

    The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario that one assigns balls into boxes. Balls arrive sequentially with each one having a binary vector X = (X[subscript 1], X[subscript 2],...,X[subscript n]) attached, with the interpretation being that if X[subscript i] = 1 the ball…

  9. ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1986-01-01

    A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.

  10. Fokker-Planck response of stochastic satellites

    NASA Technical Reports Server (NTRS)

    Huang, T. C.; Das, A.

    1982-01-01

    The present investigation is concerned with the effects of stochastic geometry and random environmental torques on the pointing accuracy of spinning and three-axis stabilized satellites. The study of pointing accuracies requires a knowledge of the rates of error growth over and above any criteria for the asymptotic stability of the satellites. For this reason the investigation is oriented toward the determination of the statistical properties of the responses of the satellites. The geometries of the satellites are considered stochastic so as to have a phenomenological model of the motions of the flexible structural elements of the satellites. A widely used method of solving stochastic equations is the Fokker-Planck approach where the equations are assumed to define a Markoff process and the transition probability densities of the responses are computed directly as a function of time. The Fokker-Planck formulation is used to analyze the response vector of a rigid satellite.

  11. A heterogeneous stochastic FEM framework for elliptic PDEs

    SciTech Connect

    Hou, Thomas Y.; Liu, Pengfei

    2015-01-15

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using the randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.
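
    The randomized range-finding step mentioned above can be sketched in a few lines in the style of Halko-type algorithms: random probes of a low-rank operator are orthonormalized into a basis that captures its range. The operator below is a synthetic stand-in.

        import numpy as np

        rng = np.random.default_rng(0)
        m, n, rank = 200, 150, 10
        A = rng.normal(size=(m, rank)) @ rng.normal(size=(rank, n))   # low-rank operator

        k = rank + 5                                   # slight oversampling
        Y = A @ rng.normal(size=(n, k))                # random probes of the range
        Q, _ = np.linalg.qr(Y)                         # orthonormal range basis

        err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
        print("relative reconstruction error:", err)   # ~ machine precision here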

  12. Improving the detection sensitivity of chromatography by stochastic resonance.

    PubMed

    Zhang, Wei; Guo, Jianru; Xiang, Bingren; Fan, Hongyan; Xu, Fengguo

    2014-05-01

    Improving the detection sensitivity of analytical instruments has been a challenging task for chemometricians since undetectability has been almost unavoidable in trace analysis, even under optimized experimental conditions and with the use of modern instruments. Various chemometrics methods have been developed which attempt to address this detection problem but with limited success (e.g., fast Fourier transform and wavelet transform). However, the application of stochastic resonance (SR) creates an entirely new and effective methodology. Stochastic resonance is a phenomenon which is manifested in non-linear systems where a weak signal can be amplified and optimized with the assistance of noise. In this review, we summarize the use of basic SR, optimization of parameters and its modifications, including periodic modulation stochastic resonance (PSRA), linear modulation stochastic resonance (LSRA), single-well potential stochastic resonance (SSR) and the Duffing oscillator algorithm (DOA) for amplifying sub-threshold small signals. We also review the advantages and the disadvantages of various SR procedures. PMID:24622614
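
    A hedged sketch of the underlying effect: in an overdamped double-well driven by a sub-threshold periodic signal, the response at the drive frequency typically peaks at an intermediate noise intensity. Parameters are illustrative and untuned.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n = 0.05, 200_000
        tt = np.arange(n) * dt
        A, om = 0.25, 0.05                 # drive below the static tipping threshold (~0.385)

        def response(D):
            x = np.zeros(n)
            kick = np.sqrt(2.0 * D * dt) * rng.normal(size=n)
            for i in range(n - 1):
                drift = x[i] - x[i] ** 3 + A * np.sin(om * tt[i])
                x[i + 1] = x[i] + drift * dt + kick[i]
            # amplitude of the Fourier component at the drive frequency
            return np.abs(np.sum(x * np.exp(-1j * om * tt))) / n

        for D in (0.05, 0.15, 0.3, 0.6):
            print(f"D={D:4.2f}  response at drive frequency: {response(D):.3f}")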

  13. Optimal sensor selection for noisy binary detection in stochastic pooling networks

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.; Li, Feng; Amblard, P.-O.; Grant, Alex J.

    2013-08-01

    Stochastic Pooling Networks (SPNs) are a useful model for understanding and explaining how naturally occurring encoding of stochastic processes can occur in sensor systems ranging from macroscopic social networks to neuron populations and nanoscale electronics. Due to the interaction of nonlinearity, random noise, and redundancy, SPNs support various unexpected emergent features, such as suprathreshold stochastic resonance, but most existing mathematical results are restricted to the simplest case where all sensors in a network are identical. Nevertheless, numerical results on information transmission have shown that in the presence of independent noise, the optimal configuration of a SPN is such that there should be partial heterogeneity in sensor parameters, such that the optimal solution includes clusters of identical sensors, where each cluster has different parameter values. In this paper, we consider a SPN model of a binary hypothesis detection task and show mathematically that the optimal solution for a specific bound on detection performance is also given by clustered heterogeneity, such that measurements made by sensors with identical parameters should either all be excluded from the detection decision or all be included. We also derive an algorithm for numerically finding the optimal solution and illustrate its utility with several examples, including a model of parallel sensory neurons with Poisson firing characteristics.

  14. Identification and stochastic control of helicopter dynamic modes

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.; Bar-Shalom, Y.

    1983-01-01

    A general treatment of parameter identification and stochastic control for use on helicopter dynamic systems is presented. Rotor dynamic models, including specific applications to rotor blade flapping and the helicopter ground resonance problem, are emphasized. Dynamic systems which are governed by periodic coefficients as well as constant coefficient models are addressed. The dynamic systems are modeled by linear state variable equations which are used in the identification and stochastic control formulation. The pure identification problem as well as the stochastic control problem, which includes combined identification and control for dynamic systems, is addressed. The stochastic control problem includes the effect of parameter uncertainty on the solution and the concept of learning and how this is affected by the control's dual effect. The identification formulation requires algorithms suitable for on-line use and thus recursive identification algorithms are considered. The applications presented use the recursive extended Kalman filter for parameter identification, which has excellent convergence for systems without process noise.

  15. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    SciTech Connect

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-20

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the

  16. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    NASA Astrophysics Data System (ADS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low

  17. A stochastic approach to open quantum systems.

    PubMed

    Biele, R; D'Agosta, R

    2012-07-11

    Stochastic methods are ubiquitous to a variety of fields, ranging from physics to economics and mathematics. In many cases, in the investigation of natural processes, stochasticity arises every time one considers the dynamics of a system in contact with a somewhat bigger system, an environment with which it is considered in thermal equilibrium. Any small fluctuation of the environment has some random effect on the system. In physics, stochastic methods have been applied to the investigation of phase transitions, thermal and electrical noise, thermal relaxation, quantum information, Brownian motion and so on. In this review, we will focus on the so-called stochastic Schrödinger equation. This is useful as a starting point to investigate the dynamics of open quantum systems capable of exchanging energy and momentum with an external environment. We discuss in some detail the general derivation of a stochastic Schrödinger equation and some of its recent applications to spin thermal transport, thermal relaxation, and Bose-Einstein condensation. We thoroughly discuss the advantages of this formalism with respect to the more common approach in terms of the reduced density matrix. The applications discussed here constitute only a few examples of a much wider range of applicability.

  18. StochPy: A Comprehensive, User-Friendly Tool for Simulating Stochastic Biological Processes

    PubMed Central

    Maarleveld, Timo R.; Olivier, Brett G.; Bruggeman, Frank J.

    2013-01-01

    Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and the timing of reaction events between genetically-identical cells. Mathematical models are indispensable for the study of phenotypic stochasticity in cellular decision-making and cell survival. There is a demand for versatile, stochastic modeling environments with extensive, preprogrammed statistics functions and plotting capabilities that hide the mathematics from novice users and offer low-level programming access to experienced users. Here we present StochPy (Stochastic modeling in Python), which is a flexible software tool for stochastic simulation in cell biology. It provides various stochastic simulation algorithms, SBML support, analyses of the probability distributions of molecule copy numbers and event waiting times, analyses of stochastic time series, and a range of additional statistical functions and plotting facilities for stochastic simulations. We illustrate the functionality of StochPy with stochastic models of gene expression, cell division, and single-molecule enzyme kinetics. StochPy has been successfully tested against the SBML stochastic test suite, passing all tests. StochPy is a comprehensive software package for stochastic simulation of the molecular control networks of living cells. It allows novice and experienced users to study stochastic phenomena in cell biology. The integration with other Python software makes StochPy both a user-friendly and easily extendible simulation tool. PMID:24260203

  19. StochPy: a comprehensive, user-friendly tool for simulating stochastic biological processes.

    PubMed

    Maarleveld, Timo R; Olivier, Brett G; Bruggeman, Frank J

    2013-01-01

    Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and the timing of reaction events between genetically-identical cells. Mathematical models are indispensable for the study of phenotypic stochasticity in cellular decision-making and cell survival. There is a demand for versatile, stochastic modeling environments with extensive, preprogrammed statistics functions and plotting capabilities that hide the mathematics from novice users and offer low-level programming access to experienced users. Here we present StochPy (Stochastic modeling in Python), which is a flexible software tool for stochastic simulation in cell biology. It provides various stochastic simulation algorithms, SBML support, analyses of the probability distributions of molecule copy numbers and event waiting times, analyses of stochastic time series, and a range of additional statistical functions and plotting facilities for stochastic simulations. We illustrate the functionality of StochPy with stochastic models of gene expression, cell division, and single-molecule enzyme kinetics. StochPy has been successfully tested against the SBML stochastic test suite, passing all tests. StochPy is a comprehensive software package for stochastic simulation of the molecular control networks of living cells. It allows novice and experienced users to study stochastic phenomena in cell biology. The integration with other Python software makes StochPy both a user-friendly and easily extendible simulation tool.
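
    A minimal usage sketch, assuming the StochPy 2.x API as documented (stochpy.SSA, DoStochSim, PlotSpeciesTimeSeries); the names should be checked against the installed release.

        import stochpy

        smod = stochpy.SSA()                       # loads a bundled demo model
        smod.DoStochSim(end=1000, mode="steps")    # one Gillespie trajectory
        smod.PlotSpeciesTimeSeries()               # copy-number time series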

  20. Experimental evidence of quantum randomness incomputability

    SciTech Connect

    Calude, Cristian S.; Dinneen, Michael J.; Dumitrescu, Monica; Svozil, Karl

    2010-08-15

    In contrast with software-generated randomness (called pseudo-randomness), quantum randomness can be proven incomputable; that is, it is not exactly reproducible by any algorithm. We provide experimental evidence of the incomputability (an asymptotic property) of quantum randomness by performing finite tests of randomness inspired by algorithmic information theory.
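
    A finite randomness test in the spirit of this experiment can be sketched by checking Borel normality: every k-bit block of a normal string appears with frequency close to 2^-k. The sketch applies the count to a software-generated (pseudo-random) string; the thresholds used in the actual study are not reproduced here.

        import random
        from collections import Counter

        random.seed(0)
        bits = "".join(random.choice("01") for _ in range(100_000))

        for k in (1, 2, 3):
            # non-overlapping k-bit blocks
            blocks = [bits[i:i + k] for i in range(0, len(bits) - k + 1, k)]
            freqs = Counter(blocks)
            worst = max(abs(c / len(blocks) - 2**-k) for c in freqs.values())
            print(f"k={k}: max |freq - 2^-k| = {worst:.4f}")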