NASA Astrophysics Data System (ADS)
Dai, Liyi
2016-05-01
Stochastic optimization is a fundamental problem that finds applications in many areas including biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information of the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, approximating the gradient by finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, it is shown that the rate can be increased to n^{-2/5} in general, and to n^{-1/2}, the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems, where n is the iteration number.
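The common-random-numbers idea above can be illustrated with a minimal Kiefer-Wolfowitz sketch: both perturbed evaluations of the noisy objective share one random seed, so the shared noise cancels in the finite difference. The quadratic sample objective, the noise model, and the gain sequences below are illustrative assumptions, not taken from the paper.

```python
import random

def kw_step(x, n, sample_loss, a=1.0, c=1.0):
    """One Kiefer-Wolfowitz iteration with a central finite difference.

    Both perturbed evaluations receive the same random seed (common
    random numbers), so shared noise cancels in the difference and the
    gradient estimate has reduced variance.
    """
    a_n = a / n              # decreasing step size
    c_n = c / n ** 0.25      # shrinking perturbation width
    seed = random.random()   # common random number for both evaluations
    grad = (sample_loss(x + c_n, seed) - sample_loss(x - c_n, seed)) / (2 * c_n)
    return x - a_n * grad

def sample_loss(x, seed):
    # Noisy observation of f(x) = (x - 2)^2; the additive noise depends
    # only on the shared seed, so it cancels under common random numbers.
    return (x - 2.0) ** 2 + (seed - 0.5)

x = 0.0
for n in range(1, 2001):
    x = kw_step(x, n, sample_loss)
```

With purely seed-dependent additive noise the cancellation is exact, so the iterates settle at the minimizer; with state-dependent noise the cancellation is only partial, which is where the variance-reduction argument matters.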
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
Pan, Indranil; Das, Saptarshi; Gupta, Amitava
2011-01-01
An optimal PID and an optimal fuzzy PID have been tuned by minimizing the Integral of Time multiplied Absolute Error (ITAE) and squared controller output for a networked control system (NCS). The tuning is attempted for a higher-order, time-delay system using stochastic algorithms, viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO), and the closed-loop performances are compared. The paper shows that random variation in network delay can be handled more efficiently with fuzzy-logic-based PID controllers than with conventional PID controllers.
A retrodictive stochastic simulation algorithm
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
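For reference, the forward (predictive) stochastic simulation algorithm that the retrodictive method complements can be sketched for a simple birth-death network; the reaction rates, time horizon, and sample size below are illustrative assumptions, not taken from the paper.

```python
import random

def ssa(x0, k_birth, k_death, t_max, rng):
    """Gillespie's direct-method SSA for a birth-death network.

    Reactions: 0 -> X with propensity k_birth, and X -> 0 with
    propensity k_death * x.  Returns the copy number at time t_max.
    """
    x, t = x0, 0.0
    while True:
        a1 = k_birth               # birth propensity
        a2 = k_death * x           # death propensity
        a0 = a1 + a2
        t += rng.expovariate(a0)   # exponential waiting time to next event
        if t > t_max:
            return x
        if rng.random() * a0 < a1: # pick a reaction proportional to propensity
            x += 1
        else:
            x -= 1

rng = random.Random(0)
# The stationary mean of this process is k_birth / k_death = 20.
samples = [ssa(0, 10.0, 0.5, 50.0, rng) for _ in range(200)]
mean = sum(samples) / len(samples)
```

The retrodictive variant described in the abstract runs the analogous sampling logic backward from a known final state, rather than forward from a known initial state.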
Segmentation of stochastic images with a stochastic random walker method.
Pätz, Torben; Preusser, Tobias
2012-05-01
We present an extension of the random walker segmentation to images with uncertain gray values. Such gray-value uncertainty may result from noise or other imaging artifacts or, more generally, from measurement errors in the image acquisition process. The purpose is to quantify the influence of the gray-value uncertainty on the result when using random walker segmentation. In random walker segmentation, a weighted graph is built from the image, where the edge weights depend on the image gradient between the pixels. For given seed regions, the probability is evaluated for a random walk on this graph starting at a pixel to end in one of the seed regions. Here, we extend this method to images with uncertain gray values. To this end, we consider the pixel values to be random variables (RVs), thus introducing the notion of stochastic images. We end up with stochastic weights for the graph in random walker segmentation and a stochastic partial differential equation (PDE) that has to be solved. We discretize the RVs and the stochastic PDE by the method of generalized polynomial chaos, combining the recent developments in numerical methods for the discretization of stochastic PDEs and an interactive segmentation algorithm. The resulting algorithm allows for the detection of regions where the segmentation result is highly influenced by the uncertain pixel values. Thus, it gives a reliability estimate for the resulting segmentation, and it furthermore allows the probability density function of the segmented object volume to be determined.
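In the deterministic setting that this work extends, the graph weights are a simple function of neighbouring gray values. A sketch for a 1-D image follows; the Gaussian weighting parameter beta and the toy image are illustrative assumptions.

```python
import math

def edge_weights(gray, beta=10.0):
    """Gaussian edge weights for random walker segmentation on a 1-D image.

    The weight between neighbouring pixels decays exponentially with the
    squared gray-value difference, so a random walker rarely crosses a
    strong intensity edge.
    """
    return [math.exp(-beta * (gray[i + 1] - gray[i]) ** 2)
            for i in range(len(gray) - 1)]

# A sharp edge between the third and fourth pixels gets a much smaller
# weight than the flat regions on either side.
w = edge_weights([0.1, 0.1, 0.1, 0.9, 0.9])
```

In the stochastic extension of the abstract, each gray value becomes a random variable, so each weight (and hence the walker's absorption probabilities) becomes a random variable as well, discretized here by generalized polynomial chaos.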
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, in part to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
Algorithm refinement for the stochastic Burgers' equation
Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. (e-mail: algarcia@algarcia.org)
2007-04-10
In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
Morton, D.P.
1994-01-01
Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Keywords: stochastic programming, hydroelectric scheduling, large-scale systems.
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle a physically intrinsic uncertainty in the variables of a numerical simulation. Whereas the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides as a numerical example the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
Derivation of Randomized Algorithms.
1985-10-01
...81] randomized algorithm; 2) AKS deterministic algorithm [AKS83]; 3) Column Sorting algorithm [Leighton 83]; 4) FLASH SORT algorithm [Reif and Valiant 83] ... (for any a). This result immediately implies that r ≤ ... log N with probability > 1 − O(N^-a), thus proving our claim. Lemma 3.3: A random S ⊆ X of ... will be sampleselect(⌊N/2⌋, N). With this modification, quicksort becomes: algorithm samplesort(X): if |X| = 1 then return X; choose a random subset S ⊆ X ...
Bootstrap performance profiles in stochastic algorithms assessment
Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro
2015-03-10
Optimization with stochastic algorithms has become a relevant research field. Due to their stochastic nature, the assessment of these algorithms is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
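The core resampling step behind bootstrap profiles can be sketched as follows; the run data and the choice of the median as the statistic are illustrative assumptions, not the paper's benchmark.

```python
import random
import statistics

def bootstrap(values, stat, n_resamples=2000, rng=None):
    """Bootstrap estimate of the sampling distribution of `stat`.

    Each resample draws len(values) observations with replacement; the
    statistic computed on the resamples approximates its sampling
    distribution even when only a handful of runs is available.
    """
    rng = rng or random.Random(0)
    return [stat(rng.choices(values, k=len(values)))
            for _ in range(n_resamples)]

# Final objective values from 10 runs of a hypothetical stochastic solver.
runs = [1.02, 0.98, 1.10, 0.95, 1.05, 0.99, 1.01, 1.07, 0.96, 1.03]
medians = bootstrap(runs, statistics.median)
spread = statistics.stdev(medians)
```

A performance profile for the median (or any other statistic) can then be built from such bootstrap distributions, one per algorithm, without parametric distributional assumptions.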
Stochastic optimization algorithms for barrier dividend strategies
NASA Astrophysics Data System (ADS)
Yin, G.; Song, Q. S.; Yang, H.
2009-01-01
This work focuses on finding the optimal barrier policy for an insurance risk model when dividends are paid to the shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed, and the rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
An exact accelerated stochastic simulation algorithm.
Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros
2009-04-14
An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
Linear-scaling and parallelisable algorithms for stochastic quantum chemistry
NASA Astrophysics Data System (ADS)
Booth, George H.; Smart, Simon D.; Alavi, Ali
2014-07-01
For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.
Stochastic structure formation in random media
NASA Astrophysics Data System (ADS)
Klyatskin, V. I.
2016-01-01
Stochastic structure formation in random media is considered using examples of elementary dynamical systems related to the two-dimensional geophysical fluid dynamics (Gaussian random fields) and to stochastically excited dynamical systems described by partial differential equations (lognormal random fields). In the latter case, spatial structures (clusters) may form with a probability of one in almost every system realization due to rare events happening with vanishing probability. Problems involving stochastic parametric excitation occur in fluid dynamics, magnetohydrodynamics, plasma physics, astrophysics, and radiophysics. A more complicated stochastic problem dealing with anomalous structures on the sea surface (rogue waves) is also considered, where the random Gaussian generation of sea surface roughness is accompanied by parametric excitation.
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
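Simulated annealing, as used above to avoid local minima, can be illustrated with a minimal one-dimensional sketch; the objective function, proposal width, and cooling schedule are illustrative assumptions rather than the flight software's actual configuration.

```python
import math
import random

def anneal(f, x0, rng, steps=5000, t0=2.0):
    """Minimal simulated annealing on a one-dimensional objective.

    A worse candidate is accepted with probability exp(-delta / T), which
    lets the search climb out of local minima; the temperature T is
    lowered geometrically, and the best point ever seen is returned.
    """
    x, fx = x0, f(x0)
    best, f_best = x, fx
    t = t0
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)   # random neighbouring candidate
        fy = f(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy             # accept improving (or lucky) move
        if fx < f_best:
            best, f_best = x, fx
        t *= 0.999                    # geometric cooling schedule
    return best

# Objective with a local minimum near x = -1 and the global minimum near x = 2.
def f(x):
    return (x - 2.0) ** 2 - 4.0 * math.exp(-8.0 * (x + 1.0) ** 2)

rng = random.Random(1)
x_best = anneal(f, -1.0, rng)
```

In the path-planning setting the scalar x becomes a vector of joint angles and f becomes the energy-like error measure, but the accept/reject logic is the same.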
Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes
NASA Technical Reports Server (NTRS)
Williams, Colin P.
1999-01-01
Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(square root of N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.
Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique.
Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep
2015-01-01
In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converging to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA.
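The stochastic-leader idea can be sketched in one dimension: instead of a deterministic best set, k leaders are drawn at random and k shrinks over the iterations. The simplified force law, the shrinking-k schedule, and the sphere test function below are illustrative assumptions, not the paper's exact formulation.

```python
import random

def sl_gsa(f, n_agents=30, iters=200, g0=1.0, rng=None):
    """Simplified one-dimensional gravitational search with stochastic leaders.

    Instead of always attracting agents toward a deterministic set of
    best agents, k leaders are drawn uniformly at random, and k shrinks
    over the iterations so early exploration gives way to convergence.
    """
    rng = rng or random.Random(0)
    x = [rng.uniform(-10.0, 10.0) for _ in range(n_agents)]
    v = [0.0] * n_agents
    best = min(x, key=f)                               # best point seen so far
    for it in range(iters):
        fit = [f(xi) for xi in x]
        i_best = min(range(n_agents), key=lambda i: fit[i])
        if fit[i_best] < f(best):
            best = x[i_best]
        worst, best_fit = max(fit), min(fit)
        span = (worst - best_fit) or 1.0
        m = [(worst - fi) / span + 1e-9 for fi in fit]  # fitness-based masses
        total = sum(m)
        g = g0 * (1.0 - it / iters)                     # decaying gravity
        k = max(2, round(n_agents * (1.0 - it / iters)))  # shrinking leader set
        leaders = rng.sample(range(n_agents), k)        # stochastic leaders
        for i in range(n_agents):
            acc = sum(rng.random() * g * (m[j] / total) *
                      (x[j] - x[i]) / (abs(x[j] - x[i]) + 1e-9)
                      for j in leaders if j != i)
            v[i] = rng.random() * v[i] + acc
            x[i] += v[i]
    return best

best = sl_gsa(lambda z: z * z)
```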
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
Perspective: Stochastic algorithms for chemical kinetics
Gillespie, Daniel T.; Hellander, Andreas; Petzold, Linda R.
2013-01-01
We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes. PMID:23656106
Stochastic mirage phenomenon in a random medium.
McDaniel, Austin; Mahalov, Alex
2017-05-15
In the framework of geometric optics, we consider the problem of characterizing the ray trajectory in a random medium with a mean refractive index gradient. Such a gradient results in the mirage phenomenon where an object's observed location is displaced from its actual location. We derive formulas for the mean ray path in both the situation of isotropic stochastic fluctuations and an important anisotropic case. For the isotropic model, the mean squared displacement is also given by a simple formula. Our results could be useful for applications involving the propagation of electromagnetic waves through the atmosphere, where larger-scale mean gradients and smaller-scale stochastic fluctuations are both present.
Stochastic algorithms for Markov models estimation with intermittent missing data.
Deltour, I; Richardson, S; Le Hesran, J Y
1999-06-01
Multistate Markov models are frequently used to characterize disease processes, but their estimation from longitudinal data is often hampered by complex patterns of incompleteness. Two algorithms for estimating Markov chain models in the case of intermittent missing data in longitudinal studies, a stochastic EM algorithm and the Gibbs sampler, are described. The first can be viewed as a random perturbation of the EM algorithm and is appropriate when the M step is straightforward but the E step is computationally burdensome. It leads to a good approximation of the maximum likelihood estimates. The Gibbs sampler is used for a full Bayesian inference. The performances of the two algorithms are illustrated on two simulated data sets. A motivating example concerned with the modelling of the evolution of parasitemia by Plasmodium falciparum (malaria) in a cohort of 105 young children in Cameroon is described and briefly analyzed.
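The stochastic EM idea can be sketched for a two-state chain with intermittently missing observations: the S-step imputes each missing state by sampling it conditional on its neighbours, and the M-step re-estimates the transition matrix from the completed sequences. The toy sequences, add-one smoothing, and single-imputation S-step below are illustrative assumptions.

```python
import random

def stochastic_em(seqs, n_states=2, n_iter=100, rng=None):
    """Stochastic EM for a Markov chain observed with intermittent gaps.

    S-step: each missing entry (None) is imputed by sampling its state
    conditional on its neighbours under the current transition matrix.
    M-step: the matrix is re-estimated from transition counts in the
    completed sequences (with add-one smoothing).
    """
    rng = rng or random.Random(0)
    p = [[1.0 / n_states] * n_states for _ in range(n_states)]
    for _ in range(n_iter):
        completed = []
        for seq in seqs:
            filled = list(seq)
            for t, s in enumerate(filled):
                if s is None:
                    w = []
                    for k in range(n_states):
                        wk = p[filled[t - 1]][k] if t > 0 else 1.0
                        if t + 1 < len(seq) and seq[t + 1] is not None:
                            wk *= p[k][seq[t + 1]]
                        w.append(wk)
                    filled[t] = rng.choices(range(n_states), weights=w)[0]
            completed.append(filled)
        counts = [[1.0] * n_states for _ in range(n_states)]  # add-one smoothing
        for filled in completed:
            for a, b in zip(filled, filled[1:]):
                counts[a][b] += 1.0
        p = [[c / sum(row) for c in row] for row in counts]
    return p

# Two short "sticky" sequences with missing entries marked None.
seqs = [
    [0, 0, 0, 0, None, 0, 0, 1, 1, 1, None, 1, 1],
    [1, 1, None, 1, 1, 0, 0, 0, None, 0, 0, 0],
]
p_hat = stochastic_em(seqs)
```

A full Bayesian treatment via the Gibbs sampler, as in the paper, would additionally resample the transition matrix from its posterior instead of taking the count-based point estimate.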
A stochastic approximation algorithm for estimating mixture proportions
NASA Technical Reports Server (NTRS)
Sparra, J.
1976-01-01
A stochastic approximation algorithm for estimating the proportions in a mixture of normal densities is presented. The algorithm is shown to converge to the true proportions in the case of a mixture of two normal densities.
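For a two-component mixture with known component densities, the stochastic approximation idea can be sketched as a Robbins-Monro recursion: each new observation nudges the proportion estimate toward that observation's posterior responsibility. The component means, noise level, and gain sequence are illustrative assumptions, not the report's exact scheme.

```python
import math
import random

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def estimate_proportion(stream, mu1=0.0, mu2=4.0, p0=0.5):
    """Stochastic approximation of the weight of a two-component mixture.

    For each new observation, the posterior probability that it came
    from component 1 is computed under the current estimate, and the
    estimate is nudged toward it with a decreasing 1/n gain.
    """
    p = p0
    for n, x in enumerate(stream, start=1):
        f1, f2 = normal_pdf(x, mu1), normal_pdf(x, mu2)
        w = p * f1 / (p * f1 + (1.0 - p) * f2)   # responsibility of component 1
        p += (w - p) / n                          # Robbins-Monro update
    return p

rng = random.Random(0)
true_p = 0.3
data = [rng.gauss(0.0, 1.0) if rng.random() < true_p else rng.gauss(4.0, 1.0)
        for _ in range(5000)]
p_hat = estimate_proportion(data)
```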
Constant-complexity stochastic simulation algorithm with optimal binning.
Sanft, Kevin R; Othmer, Hans G
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie's Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
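The table-with-time-binning idea is in the spirit of a calendar queue; a toy sketch follows (fixed bin width, linear scan inside the current bin). This is an illustration of event-time binning, not the authors' exact data structure or adaptive binning strategy.

```python
class CalendarQueue:
    """Toy calendar queue in the spirit of event-time binning.

    Event times are hashed into fixed-width time bins, so insertion and
    extraction of the earliest event cost O(1) on average when events
    are spread evenly, independent of the number of reaction channels.
    `pop` assumes the queue is non-empty.
    """

    def __init__(self, width=0.1, n_bins=1024):
        self.width, self.n = width, n_bins
        self.bins = [[] for _ in range(n_bins)]
        self.t = 0.0                     # front of the calendar

    def push(self, time, event):
        self.bins[int(time / self.width) % self.n].append((time, event))

    def pop(self):
        while True:
            bucket = self.bins[int(self.t / self.width) % self.n]
            due = [e for e in bucket if e[0] < self.t + self.width]
            if due:                      # earliest event in the current bin
                item = min(due)
                bucket.remove(item)
                return item
            self.t += self.width         # otherwise advance to the next bin

cq = CalendarQueue()
for when, name in [(0.33, "c"), (0.05, "a"), (0.12, "b")]:
    cq.push(when, name)
order = [cq.pop()[1] for _ in range(3)]
```

In an SSA setting the events would be tentative reaction firing times; adapting the bin width to the event density is what keeps the per-event cost constant.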
A random walk to stochastic diffusion through spreadsheet analysis
NASA Astrophysics Data System (ADS)
Brazzle, Bob
2013-11-01
This paper describes a random walk simulation using a number cube and a lattice of concentric rings of tiled hexagons. At the basic level, it gives beginning students a concrete connection to the concept of stochastic diffusion and related physical quantities. A simple algorithm is presented that can be used to set up spreadsheet files to calculate these simulated quantities and even to "discover" the diffusion equation. Lattices with different geometries in two and three dimensions are also presented. This type of simulation provides fertile ground for independent investigations by all levels of undergraduate students.
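The number-cube random walk described above is also easy to script directly rather than in a spreadsheet. In the sketch below (axial hexagonal coordinates and all parameters are my own illustrative choices), the mean squared displacement of many walkers is estimated; for an uncorrelated unit-step walk it grows linearly with step number, the diffusive signature students can "discover".

```python
import random

def hex_walk_msd(n_walkers=2000, n_steps=50, seed=1):
    """Mean squared displacement of random walkers on a hexagonal lattice,
    one of six equally likely directions per step (as with a number cube).
    In axial coordinates (q, r) each of the six moves has unit Euclidean
    length, and the squared distance from the origin is q^2 + r^2 + q*r."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        q = r = 0
        for _ in range(n_steps):
            dq, dr = rng.choice(moves)
            q += dq
            r += dr
        total += q * q + r * r + q * r    # squared Euclidean distance
    return total / n_walkers

msd = hex_walk_msd()   # theory: MSD ~ n_steps for an uncorrelated walk
```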
A random dynamical systems perspective on stochastic resonance
NASA Astrophysics Data System (ADS)
Cherubini, Anna Maria; Lamb, Jeroen S. W.; Rasmussen, Martin; Sato, Yuzuru
2017-07-01
We study stochastic resonance in an over-damped approximation of the stochastic Duffing oscillator from a random dynamical systems point of view. We analyse this problem in the general framework of random dynamical systems with a nonautonomous forcing. We prove the existence of a unique global attracting random periodic orbit and a stationary periodic measure. We use the stationary periodic measure to define an indicator for the stochastic resonance.
NASA Astrophysics Data System (ADS)
Lampoudi, Sotiria; Gillespie, Dan T.; Petzold, Linda R.
2009-03-01
The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
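The MSA's central device, representing diffusive transfers with appropriately conditioned binomial random variables, can be illustrated with a deliberately simplified sketch. Here the domain is a periodic 1-D chain, each molecule leaves its subvolume with probability `p_move` and the leavers split evenly between the two adjacent neighbors; the real MSA conditions the draws over all neighbors within a prescribed distance, so everything below is an assumption for illustration only.

```python
import random

def diffuse_step(counts, p_move, rng):
    """One leap of binomially sampled diffusive transfers on a periodic
    1-D chain of subvolumes (simplified MSA-style step). A binomial
    number of molecules leaves each subvolume; a conditioned binomial
    splits the leavers between the two neighbors, so total molecule
    number is conserved exactly."""
    def binomial(n, p):
        # simple Bernoulli-sum sampler; adequate for a sketch
        return sum(rng.random() < p for _ in range(n))

    n = len(counts)
    new = list(counts)
    for i, c in enumerate(counts):
        movers = binomial(c, p_move)
        left = binomial(movers, 0.5)      # conditioned split of the leavers
        new[i] -= movers
        new[(i - 1) % n] += left
        new[(i + 1) % n] += movers - left
    return new

rng = random.Random(2)
before = [100, 0, 0, 0, 50]
after = diffuse_step(before, 0.2, rng)
```

Because every leaver is delivered to exactly one neighbor, molecule number is conserved by construction, one of the properties that lets a leap of this kind replace many individual ISSA transfer events when diffusion dominates.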
Random-order fractional bistable system and its stochastic resonance
NASA Astrophysics Data System (ADS)
Gao, Shilong; Zhang, Li; Liu, Hui; Kan, Bixia
2017-01-01
In this paper, the diffusive motion of Brownian particles in a viscous liquid subject to stochastic fluctuations of the external environment is modeled by a random-order fractional bistable equation, and the stochastic resonance phenomena in this system are investigated as a typical nonlinear dynamic behavior. First, the derivation of the random-order fractional bistable system is given; in particular, the random-power-law memory is discussed in depth to obtain a physical interpretation of the random-order fractional derivative. Second, the stochastic resonance evoked by the random order and an external periodic force is studied by numerical simulation; notably, frequency-shifting phenomena in the periodic output are observed in the stochastic resonance induced by the random-order excitation. Finally, the stochastic resonance of the system under the double stochastic excitation of the random order and internal color noise is also investigated.
Stochastic Formal Correctness of Numerical Algorithms
NASA Technical Reports Server (NTRS)
Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick
2009-01-01
We provide a framework for bounding the probability that the accumulated errors of a numerical algorithm never exceed a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. For the first two applications we compute the number of bits that remain continuously significant, with a probability of failure around one in a billion, where worst-case analysis concludes that no significant bit remains. We use PVS, since such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
From shape to randomness: A classification of Langevin stochasticity
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Cohen, Morrel H.
2013-01-01
The Langevin equation, perhaps the most elemental stochastic differential equation in the physical sciences, describes the dynamics of a random motion driven simultaneously by a deterministic potential field and by a stochastic white noise. The Langevin equation is, in effect, a mechanism that maps the stochastic white-noise input to a stochastic output: a stationary steady state distribution in the case of potential wells, and a transient extremum distribution in the case of potential gradients. In this paper we explore the degree of randomness of the Langevin equation's stochastic output, and classify it à la Mandelbrot into five states of randomness ranging from "infra-mild" to "ultra-wild". We establish closed-form and highly implementable analytic results that determine the randomness of the Langevin equation's stochastic output, based on the shape of the Langevin equation's potential field.
Random Lie-point symmetries of stochastic differential equations
NASA Astrophysics Data System (ADS)
Gaeta, Giuseppe; Spadaro, Francesco
2017-05-01
We study the invariance of stochastic differential equations under random diffeomorphisms and establish the determining equations for random Lie-point symmetries of stochastic differential equations, both in Ito and in Stratonovich forms. We also discuss relations with previous results in the literature.
Stochastic reaction-diffusion algorithms for macromolecular crowding
NASA Astrophysics Data System (ADS)
Sturrock, Marc
2016-06-01
Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
Cao, Yang (ycao@cs.ucsb.edu); Gillespie, Dan (GillespieDT@mailaps.org); Petzold, Linda (petzold@engineering.ucsb.edu)
2005-07-01
In this paper, we introduce a multiscale stochastic simulation algorithm (MSSA) which makes use of Gillespie's stochastic simulation algorithm (SSA) together with a new stochastic formulation of the partial equilibrium assumption (PEA). This method is much more efficient than SSA alone. It works even with a very small population of fast species. Implementation details are discussed, and an application to the modeling of the heat shock response of E. coli is presented which demonstrates the excellent efficiency and accuracy obtained with the new method.
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
Stochastic Semidefinite Programming: Applications and Algorithms
2012-03-03
Reported publications: Baha M. Alzalg and K. A. Ariyawansa, Stochastic...symmetric programming over integers. International Conference on Scientific Computing, Las Vegas, Nevada, July 18--21, 2011. Baha M. Alzalg, On recent... Baha M. Alzalg and K. A. Ariyawansa, Stochastic mixed integer second-order cone programming (proceeding publication).
NASA Astrophysics Data System (ADS)
Kozel, Tomas; Stary, Milos
2016-10-01
The described models use randomly generated inflow forecasts of varying lengths, all shorter than one year. Each forecast is transformed into a series of release (discharge) decisions of the same length; adaptive management applies only the first value of that series before re-forecasting. Stochastic management works with the dispersion of the controlled discharge value, and its main advantage is that it yields a range of possibilities rather than a single trajectory. This article describes the construction and evaluation of an adaptive stochastic model based on a genetic algorithm (a classical optimization method). The model was used for stochastic management of a large open water reservoir with a storage function, with the genetic algorithm serving as the optimizer: the forecasted inflow is supplied to the model, which computes the release for a chosen probability level. The model was tested and validated on a synthetic large open water reservoir, and its results for given probability levels were compared with those of the same model driven by a perfect (100%) forecast, in which the forecasted values equal the real values. Management of the reservoir was logical; increasing the number of forecasts from 300 to 500 improved the results, but further increases to 750 and 1000 did not bring the expected improvement. The influence of forecast length and forecast number on the course of management was also tested. Because a classical optimization model requires too much computation time, the genetic-algorithm-based stochastic model was run as a parallel calculation on a cluster.
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Xiu, Dongbin
2016-06-21
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.
2012-01-01
Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
Azad, Abdus Salam; Islam, Md Monirul; Chakraborty, Saikat
2017-01-27
The vehicle routing problem (VRP) is a widely studied combinatorial optimization problem. We introduce a variant of the multidepot and periodic VRP (MDPVRP) and propose a heuristic initialized stochastic memetic algorithm to solve it. The main challenge in designing such an algorithm for a large combinatorial optimization problem is to avoid premature convergence by maintaining a balance between exploration and exploitation of the search space. We employ intelligent initialization and stochastic learning to address this challenge. The intelligent initialization technique constructs a population by a mix of random and heuristic generated solutions. The stochastic learning enhances the solutions' quality selectively using simulated annealing with a set of random and heuristic operators. The hybridization of randomness and greediness in the initialization and learning process helps to maintain the balance between exploration and exploitation. Our proposed algorithm has been tested extensively on the existing benchmark problems and outperformed the baseline algorithms by a large margin. We further compared our results with that of the state-of-the-art algorithms working under MDPVRP formulation and found a significant improvement over their results.
Random Walk-Based Solution to Triple Level Stochastic Point Location Problem.
Jiang, Wen; Huang, De-Shuang; Li, Shenghong
2016-06-01
This paper considers the stochastic point location (SPL) problem as a learning mechanism trying to locate a point on a real line by interacting with a random environment. In contrast to the stochastic environments in the literature, which confine the learning mechanism to moving in two directions (left or right), this paper introduces a general triple-level stochastic environment that not only tells the learning mechanism to go left or right but may also inform it to stay unmoved. As we prove in this paper, the environment reported in the previous literature is a special case of the triple-level environment. A new learning algorithm, named the random walk-based triple-level learning algorithm, is proposed to locate an unknown point under this new type of environment. To examine the performance of this algorithm, we divide triple-level SPL problems into four distinct scenarios according to the properties of the unknown point and the stochastic environment, and prove that the proposed learning algorithm works properly whether the unknown point is static or evolving with time, even under a triple-level nonstationary environment and when the convergence condition is not satisfied for some time, cases rarely considered in existing SPL problems. Extensive experiments validate our theoretical analyses and demonstrate that the proposed learning algorithms are effective and efficient.
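A toy version of a random-walk learner in a triple-level environment (left/right/stay advice, truthful with probability p > 0.5) might look like the following; the grid resolution, truthfulness probability, and uniform lying model are illustrative assumptions, not the paper's algorithm.

```python
import random

def triple_level_spl(target, p=0.8, resolution=1000, n_iter=20000, seed=3):
    """Random-walk learner for a triple-level SPL environment (sketch).
    The environment answers "left"/"right"/"stay"; the answer is truthful
    with probability p and otherwise a uniformly random wrong answer.
    The learner simply follows the (noisy) advice on a 1/resolution grid."""
    rng = random.Random(seed)
    step = 1.0 / resolution
    x = 0.5
    for _ in range(n_iter):
        if abs(x - target) < step / 2:
            truth = "stay"
        elif x < target:
            truth = "right"
        else:
            truth = "left"
        if rng.random() < p:
            advice = truth
        else:
            advice = rng.choice([d for d in ("left", "right", "stay")
                                 if d != truth])
        if advice == "right":
            x = min(1.0, x + step)
        elif advice == "left":
            x = max(0.0, x - step)
    return x

est = triple_level_spl(0.73)   # learner hovers near the unknown point
```

Because the truthful advice outweighs the lies (p > 0.5), the walk has a net drift toward the target and then fluctuates tightly around it, which is the intuition behind the convergence results above.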
Randomized approximate nearest neighbors algorithm
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-01-01
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {xj} in R^d, the algorithm attempts to find k nearest neighbors for each xj, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k^2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure permitting a rapid search for the k nearest neighbors among {xj} for an arbitrary point in R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {xj} and illustrate its performance via several numerical examples. PMID:21885738
On stochastic approximation algorithms for classes of PAC learning problems
Rao, N.S.V.; Uppuluri, V.R.R.; Oblow, E.M.
1994-03-01
The classical stochastic approximation methods are shown to yield algorithms for several formulations of the PAC learning problem defined on the domain [0,1]^d. Under some assumptions on the differentiability of the probability measure functions, simple algorithms for some PAC learning problems are proposed based on networks of non-polynomial units (e.g., artificial neural networks). Conditions on the sample sizes required to ensure the error bounds are derived using martingale inequalities.
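For readers unfamiliar with the classical stochastic approximation methods invoked here, a one-dimensional Kiefer-Wolfowitz iteration is sketched below. The gain and spacing schedules follow the classical a_n = a/n, c_n = c/n^(1/4) form; the noisy quadratic objective and all constants are illustrative assumptions.

```python
import random

def kiefer_wolfowitz(noisy_f, x0, n_iter=4000, seed=8):
    """Kiefer-Wolfowitz stochastic approximation sketch: minimize the
    expectation of a noisy function using finite-difference gradient
    estimates with decaying gains a_n = 1/n and spacings c_n = 1/n^0.25
    (classical schedule; constants are illustrative)."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n
        c_n = 1.0 / n ** 0.25
        # two-sided finite-difference gradient estimate from noisy samples
        grad = (noisy_f(x + c_n, rng) - noisy_f(x - c_n, rng)) / (2 * c_n)
        x -= a_n * grad
    return x

# noisy quadratic with true minimum at 3.0
xhat = kiefer_wolfowitz(lambda x, r: (x - 3.0) ** 2 + r.gauss(0, 0.1), 0.0)
```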
Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.
Parameter identification using a creeping-random-search algorithm
NASA Technical Reports Server (NTRS)
Parrish, R. V.
1971-01-01
A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
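The basic creeping-random-search idea, plus one plausible convergence-speeding modification of the kind the study compares, can be sketched as follows. The shrink-on-failure rule and all constants here are my own assumptions, not the report's exact modification.

```python
import random

def creeping_random_search(f, x0, sigma=0.5, n_iter=2000, shrink=0.95, seed=4):
    """Creeping random search sketch: perturb the current best point with a
    small Gaussian step and keep the step only if it improves the objective.
    Shrinking the step size after a run of failures is one simple
    convergence-speeding modification (an assumption, for illustration)."""
    rng = random.Random(seed)
    best_x, best_f = list(x0), f(x0)
    fails = 0
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, sigma) for xi in best_x]
        fc = f(cand)
        if fc < best_f:
            best_x, best_f, fails = cand, fc, 0
        else:
            fails += 1
            if fails >= 20:        # contract the search radius after
                sigma *= shrink    # 20 consecutive failures
                fails = 0
    return best_x, best_f

# a two-parameter identification-style test: minimize a quadratic bowl
x, fx = creeping_random_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                               [0.0, 0.0])
```

Because acceptance requires only an objective comparison, the method tolerates state/control limits and nonlinear constraints simply by returning a large penalty value from `f`, which is why such searches handle problems that defeat gradient-based deterministic methods.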
Random Walk Analysis in Antagonistic Stochastic Games
2010-07-01
Related publications include: ..., J.H. and Ke, H-J., Multilayers in a Modulated Stochastic Game, Journal of Mathematical Analysis and Applications, 353 (2009), 553-565; further articles appear in the Journal of Mathematical Analysis and Applications, an honorary volume of Cambridge Scientific Publishers, and Mathematical and Computer...
Stochastic inequality probabilities for adaptively randomized clinical trials.
Cook, John D; Nadarajah, Saralees
2006-06-01
We examine stochastic inequality probabilities of the form P (X > Y) and P (X > max (Y, Z)) where X, Y, and Z are random variables with beta, gamma, or inverse gamma distributions. We discuss the applications of such inequality probabilities to adaptively randomized clinical trials as well as methods for calculating their values.
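When closed-form expressions for such stochastic inequality probabilities are inconvenient, plain Monte Carlo gives a serviceable estimate. The sketch below uses a beta/uniform pair chosen so the exact answer is known (P(X > Y) = E[X] = 2/3 when Y is uniform on [0,1]); the sampler interface is an illustrative assumption.

```python
import random

def prob_x_gt_y(sample_x, sample_y, n=200000, seed=5):
    """Monte Carlo estimate of the stochastic inequality P(X > Y), the
    quantity used to skew allocation in adaptively randomized trials.
    sample_x / sample_y draw one variate given a random.Random instance."""
    rng = random.Random(seed)
    hits = sum(sample_x(rng) > sample_y(rng) for _ in range(n))
    return hits / n

# X ~ Beta(2, 1), Y ~ Uniform(0, 1): P(X > Y) = E[X] = 2/3 exactly
p = prob_x_gt_y(lambda r: r.betavariate(2, 1), lambda r: r.random())
```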
Random attractor of non-autonomous stochastic Boussinesq lattice system
Zhao, Min Zhou, Shengfan
2015-09-15
In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupling coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.
STP: A Stochastic Tunneling Algorithm for Global Optimization
Oblow, E.M.
1999-05-20
A stochastic approach to solving continuous-function global optimization problems is presented. It builds on the tunneling approach to deterministic optimization presented by Barhen et al. by combining a series of local descents with stochastic searches. The method uses a rejection-based stochastic procedure to locate new local-minimum descent regions and a fixed Lipschitz-like constant to reject unpromising regions in the search space, thereby increasing the efficiency of the tunneling process. The algorithm is easily implemented in low-dimensional problems and scales readily to large problems, although in the latter case it is less effective without further heuristics. Several improvements to the basic algorithm, which use approximate estimates of the algorithm's parameters for implementation in high-dimensional problems, are also discussed. Benchmark results are presented which show that the algorithm is competitive with the best previously reported global optimization techniques. A successful application of the approach to a large-scale seismology problem of substantial computational complexity, using a low-dimensional approximation scheme, is also reported.
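The tunneling idea can be conveyed by the standard stochastic-tunneling objective transform, f_stun = 1 - exp(-gamma (f(x) - f_best)). Note this is a common form from the tunneling literature and only an analogue of the rejection-based procedure described above, whose details differ.

```python
import math

def f_stun(f_x, f_best, gamma=1.0):
    """Stochastic-tunneling objective transform (a common form; the
    report's exact rejection rule differs in detail):
        f_stun = 1 - exp(-gamma * (f(x) - f_best)).
    The best minimum found so far maps to 0, while high barriers are
    compressed toward 1, so a stochastic search can pass ("tunnel")
    through them instead of being trapped behind large function values."""
    return 1.0 - math.exp(-gamma * (f_x - f_best))

# barriers of very different heights become nearly indistinguishable
vals = [f_stun(v, 0.0) for v in (0.0, 1.0, 10.0, 100.0)]
```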
A Global Optimization Algorithm Using Stochastic Differential Equations.
1985-02-01
Affiliations (from the report front matter): Dipartimento di Matematica, Università di Bari, 70125 Bari (Italy); Istituto di Fisica, 2a Università di Roma "Tor Vergata", Via Orazio Raimondo, 00173 (La Romanina) Roma (Italy); Istituto di Matematica... Keywords: Optimization, Stochastic Differential Equations. Work Unit Number 5 (Optimization and Large Scale Systems).
Computing gap free Pareto front approximations with stochastic search algorithms.
Schütze, Oliver; Laumanns, Marco; Tantar, Emilia; Coello, Carlos A Coello; Talbi, El-Ghazali
2010-01-01
Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of epsilon-dominance. Though bounds on the quality of the limit approximation-which are entirely determined by the archiving strategy and the value of epsilon-have been obtained, the strategies do not guarantee to obtain a gap free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a epsilon A, is "large." Since such gap free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we are aiming in this work for such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for a possible hybridization of a given stochastic search algorithm with a particular local search strategy-multi-objective continuation methods-by showing that the concept of epsilon-dominance can be integrated into this approach in a suitable way.
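The epsilon-dominance concept underlying these archiving strategies can be made concrete with a minimal archiver (minimization assumed). The acceptance rule below is the plain epsilon-dominance test, without the additional gap-free machinery the paper develops.

```python
def eps_dominates(a, b, eps):
    """a epsilon-dominates b if a_i - eps <= b_i in every objective
    (minimization convention)."""
    return all(ai - eps <= bi for ai, bi in zip(a, b))

def archive_update(archive, f_new, eps=0.1):
    """Minimal epsilon-dominance archiver sketch: accept a new objective
    vector only if no archived point epsilon-dominates it, then discard
    archived points it (plainly) dominates. This keeps the archive finite
    and well spread, though gaps in the front remain possible -- the
    refinements the paper proposes address exactly that."""
    if any(eps_dominates(a, f_new, eps) for a in archive):
        return archive                       # rejected: too close to the front
    return [a for a in archive if not eps_dominates(f_new, a, 0.0)] + [f_new]

archive = []
for f in [(1.0, 0.0), (0.0, 1.0), (0.95, 0.05)]:
    archive = archive_update(archive, f)
# (0.95, 0.05) is epsilon-dominated by (1.0, 0.0) and is rejected
```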
Decomposition algorithms for stochastic programming on a computational grid.
Linderoth, J.; Wright, S.; Mathematics and Computer Science; Axioma Inc.
2003-01-01
We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.
Stochastic deletion-insertion algorithm to construct dense linkage maps.
Wu, Jixiang; Lou, Xiang-Yang; Gonda, Michael
2011-01-01
In this study, we proposed a stochastic deletion-insertion (SDI) algorithm for constructing large-scale linkage maps. This SDI algorithm was compared with three published approximation approaches, the seriation (SER), neighbor mapping (NM), and unidirectional growth (UG) approaches, on the basis of simulated F(2) data with different population sizes, missing genotype rates, and numbers of markers. Simulation results showed that the SDI method had a similar or higher percentage of correct linkage orders than the other three methods. This SDI algorithm was also applied to a real dataset and compared with the other three methods. The total linkage map distance (cM) obtained by the SDI method (148.08 cM) was smaller than the distance obtained by SER (225.52 cM) and two published distances (150.11 cM and 150.38 cM). Since this SDI algorithm is stochastic, a more accurate linkage order can be quickly obtained by repeating this algorithm. Thus, this SDI method, which combines the advantages of accuracy and speed, is an important addition to the current linkage mapping toolkit for constructing improved linkage maps.
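The deletion-insertion move itself is simple to sketch. In the toy below, markers are ordered to minimize total adjacent-pair distance, a stand-in objective for the linkage criteria used on real genotype data; the greedy acceptance rule and iteration budget are illustrative assumptions.

```python
import random

def sdi_order(dist, n_iter=3000, seed=7):
    """Stochastic deletion-insertion sketch: repeatedly delete a random
    marker from the current order and reinsert it at a random position,
    keeping the move if the total adjacent-pair distance (a proxy for
    map length) does not increase. dist[i][j] is a pairwise distance."""
    rng = random.Random(seed)
    n = len(dist)
    order = list(range(n))
    rng.shuffle(order)

    def length(o):
        return sum(dist[o[k]][o[k + 1]] for k in range(len(o) - 1))

    best = length(order)
    for _ in range(n_iter):
        cand = order[:]
        m = cand.pop(rng.randrange(n))     # delete a random marker...
        cand.insert(rng.randrange(n), m)   # ...and reinsert it elsewhere
        lc = length(cand)
        if lc <= best:
            order, best = cand, lc
    return order, best

# markers at positions 0..7 on a line; the shortest path has length 7
dist = [[abs(i - j) for j in range(8)] for i in range(8)]
order, best = sdi_order(dist)
```

Because the procedure is stochastic, independent repeats can be compared and the shortest map kept, mirroring the paper's point that repetition quickly yields a more accurate linkage order.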
NASA Astrophysics Data System (ADS)
Liang, Rui; Schruff, Tobias; Jia, Xiaodong; Schüttrumpf, Holger; Frings, Roy M.
2015-11-01
Porosity, as one of the key properties of sediment mixtures, is poorly understood. Most of the existing porosity predictors based upon grain size characteristics have been unable to produce satisfying results for fluvial sediment porosity, due to the lack of consideration of other porosity-controlling factors like grain shape and depositional condition. Considering this, a stochastic digital packing algorithm was applied in this work, which provides an innovative way to pack particles of arbitrary shapes and sizes based on digitization of both particles and packing space. The purpose was to test the applicability of this packing algorithm in predicting fluvial sediment porosity by comparing its predictions with outcomes obtained from laboratory measurements. The laboratory samples examined were two natural fluvial sediments from the Rhine River and Kall River (Germany) and commercial glass beads (spheres). All samples were artificially combined into seven grain size distributions: four unimodal distributions and three bimodal distributions. Our study demonstrates that apart from grain size, grain shape also has a clear impact on porosity. The stochastic digital packing algorithm successfully reproduced the measured variations in porosity for the three different particle sources. However, the packing algorithm systematically overpredicted the porosity measured in random dense packing conditions, mainly because the random motion of particles during settling introduced unwanted kinematic sorting and shape effects. The results suggest that the packing algorithm produces loose packing structures, and is useful for trend analysis of packing porosity.
Stochastic Kinetic Monte Carlo algorithms for long-range Hamiltonians
Mason, D R; Rudd, R E; Sutton, A P
2003-10-13
We present a higher order kinetic Monte Carlo methodology suitable to model the evolution of systems in which the transition rates are non-trivial to calculate or in which Monte Carlo moves are likely to be non-productive flicker events. The second order residence time algorithm first introduced by Athenes et al. [1] is rederived from the n-fold way algorithm of Bortz et al. [2] as a fully stochastic algorithm. The second order algorithm can be dynamically called when necessary to eliminate unproductive flickering between a metastable state and its neighbors. An algorithm combining elements of the first order and second order methods is shown to be more efficient, in terms of the number of rate calculations, than the first order or second order methods alone while remaining statistically identical. This efficiency is of prime importance when dealing with computationally expensive rate functions such as those arising from long-range Hamiltonians. Our algorithm has been developed for use when considering simulations of vacancy diffusion under the influence of elastic stress fields. We demonstrate the improved efficiency of the method over that of the n-fold way in simulations of vacancy diffusion in alloys. Our algorithm is seen to be an order of magnitude more efficient than the n-fold way in these simulations. We show that when magnesium is added to an Al-2at.%Cu alloy, this has the effect of trapping vacancies. When trapping occurs, we see that our algorithm performs thousands of events for each rate calculation performed.
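For reference, the first-order n-fold way step from which the second-order residence-time algorithm is rederived can be sketched as follows. Every step fires an event chosen with probability proportional to its rate, and the clock advances by an exponential waiting time; the two-channel rate list is an illustrative assumption.

```python
import math
import random

def nfold_way_step(rates, rng):
    """One step of the Bortz-Kalos-Lebowitz n-fold way (first-order
    residence-time algorithm): pick event i with probability
    rates[i]/sum(rates), then advance time by an exponential waiting
    time with the total rate. No step is ever rejected, which is the
    method's advantage over naive Metropolis-style flickering."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):      # roulette-wheel event selection
        acc += k
        if acc >= r:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

rng = random.Random(6)
picks = [nfold_way_step([1.0, 3.0], rng)[0] for _ in range(100000)]
frac = picks.count(1) / len(picks)     # expected near 3/(1+3) = 0.75
```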
Gotway, C.A.; Rutherford, B.M.
1993-09-01
Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
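One standard way to generate realizations with a specified covariance (a generic sketch, not one of the specific algorithms compared in the paper) is Cholesky factorization of the target covariance matrix; the exponential covariance model and 1-D grid here are illustrative assumptions.

```python
import numpy as np

def gaussian_field_realizations(x, sill=1.0, range_=10.0, n_real=100, seed=0):
    """Unconditional zero-mean Gaussian random field realizations at 1-D
    locations x, with exponential covariance C(h) = sill * exp(-h / range_)."""
    rng = np.random.default_rng(seed)
    h = np.abs(x[:, None] - x[None, :])        # pairwise distances
    cov = sill * np.exp(-h / range_)           # target covariance matrix
    L = np.linalg.cholesky(cov)                # cov = L @ L.T
    z = rng.standard_normal((len(x), n_real))  # iid N(0,1) deviates
    return L @ z                               # correlated realizations

x = np.arange(50.0)
fields = gaussian_field_realizations(x)        # one column per realization
```

Each simulation algorithm in the comparison above corresponds to a different (often approximate) way of producing such correlated deviates, which is precisely why the resulting response distributions can differ.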
NASA Astrophysics Data System (ADS)
Roh, Min K.; Daigle, Bernie J.; Gillespie, Dan T.; Petzold, Linda R.
2011-12-01
In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA) are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)], 10.1063/1.2987701. The swSSA substantially reduces estimator variance by implementing system state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, thus it is inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA—the state-dependent doubly weighted SSA (sdwSSA)—that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.
An adaptive multi-level simulation algorithm for stochastic biological systems.
Lester, C; Yates, C A; Giles, M B; Baker, R E
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more computationally efficient, these methods generate system statistics that suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
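A minimal fixed-τ tau-leap loop of the kind the adaptive method generalizes might look like this; the pure-decay example and all parameter values are illustrative, and no τ-selection or adaptivity is included.

```python
import numpy as np

def tau_leap(x0, stoich, propensity, tau, t_end, seed=0):
    """Fixed-step tau-leaping for a chemical reaction network.

    x0         -- initial copy numbers (1-D array)
    stoich     -- (n_reactions, n_species) stoichiometry matrix
    propensity -- function mapping state -> array of reaction propensities
    """
    rng = np.random.default_rng(seed)
    x, t = np.array(x0, dtype=float), 0.0
    while t < t_end:
        a = propensity(x)
        # Number of firings of each reaction in [t, t+tau) is Poisson(a*tau).
        k = rng.poisson(a * tau)
        x = np.maximum(x + k @ stoich, 0.0)   # clamp to avoid negative counts
        t += tau
    return x

# Pure decay A -> 0 at rate c = 0.1: E[A(t)] = A0 * exp(-c*t), so ~368 at t=10.
x_final = tau_leap([1000.0], stoich=np.array([[-1.0]]),
                   propensity=lambda x: np.array([0.1 * x[0]]),
                   tau=0.01, t_end=10.0)
```

The bias mentioned above enters because all propensities are frozen over each interval of length τ, which is exactly what an adaptive τ tries to keep under control.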
Stochastic Simulation of Microseisms Using Theory of Conditional Random Fields
NASA Astrophysics Data System (ADS)
Morikawa, H.; Akamatsu, J.; Nishimura, K.; Onoue, K.; Kameda, H.
We examine the applicability of conditional stochastic simulation to interpretation of microseisms observed on soft soil sediments at Kushiro, Hokkaido, Japan. The theory of conditional random fields developed by Kameda and Morikawa (1994) is used, which allows one to perform interpolation of a Gaussian stochastic time-space field that is conditioned by realized values of time functions specified at some discrete locations. The applicability is examined by a blind test, that is, by comparing a set of simulated seismograms and recorded ones obtained from three-point array observations. A test of fitness was performed by means of the sign test. It is concluded that the method is applicable to interpretation of microseisms, and that the wave field of microseisms can be treated as Gaussian random fields both in time and space.
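The conditioning step underlying such simulations can be sketched with the standard Gaussian conditional mean/covariance formulas. This is a generic sketch on an illustrative 1-D exponential covariance, not the Kameda-Morikawa time-space field itself; all sizes and parameters are assumptions.

```python
import numpy as np

def conditional_gaussian_sim(cov, obs_idx, obs_val, n_sim, seed=0):
    """Simulate a zero-mean Gaussian field conditioned on observed values,
    using the standard Gaussian conditional mean/covariance formulas."""
    idx = np.arange(cov.shape[0])
    un = np.setdiff1d(idx, obs_idx)                 # unobserved locations
    C_oo = cov[np.ix_(obs_idx, obs_idx)]
    C_uo = cov[np.ix_(un, obs_idx)]
    C_uu = cov[np.ix_(un, un)]
    W = C_uo @ np.linalg.inv(C_oo)                  # simple-kriging weights
    mean = W @ obs_val                              # conditional mean
    cond_cov = C_uu - W @ C_uo.T                    # conditional covariance
    L = np.linalg.cholesky(cond_cov + 1e-10 * np.eye(len(un)))
    rng = np.random.default_rng(seed)
    return un, mean[:, None] + L @ rng.standard_normal((len(un), n_sim))

# Illustrative 1-D field with exponential covariance, conditioned at both ends.
x = np.arange(20.0)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
un, sims = conditional_gaussian_sim(cov, obs_idx=np.array([0, 19]),
                                    obs_val=np.array([1.0, -1.0]), n_sim=2000)
```

Every simulated path honors the observed values in distribution, which is what makes the blind comparison against recorded array seismograms meaningful.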
Weighted Flow Algorithms (WFA) for stochastic particle coagulation
DeVille, R.E.L.; Riemer, N.; West, M.
2011-09-20
Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.
Quantum random-walk search algorithm
Shenvi, Neil; Whaley, K. Birgitta; Kempe, Julia
2003-05-01
Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with O(√N) calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms.
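The O(√N) behavior can be checked with a classical state-vector simulation of the closely related Grover iteration (oracle phase flip followed by inversion about the mean); the database size and marked index below are illustrative.

```python
import math
import numpy as np

def grover_search(n_items, marked, n_iter=None):
    """Classical state-vector simulation of Grover's quantum search.
    Returns the probability of measuring the marked item after
    roughly (pi/4) * sqrt(N) oracle calls."""
    if n_iter is None:
        n_iter = int(round(math.pi / 4 * math.sqrt(n_items)))
    amp = np.full(n_items, 1.0 / math.sqrt(n_items))  # uniform superposition
    for _ in range(n_iter):
        amp[marked] *= -1.0              # oracle: phase-flip the marked item
        amp = 2.0 * amp.mean() - amp     # diffusion: inversion about the mean
    return amp[marked] ** 2

p = grover_search(1024, marked=7)   # only ~25 oracle calls for N = 1024
```

A classical exhaustive search needs O(N) oracle calls on average, so the quadratic gap is already visible at N = 1024.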
An integrated optimal control algorithm for discrete-time nonlinear stochastic system
NASA Astrophysics Data System (ADS)
Kek, Sie Long; Lay Teo, Kok; Mohd Ismail, A. A.
2010-12-01
Consider a discrete-time nonlinear system with random disturbances appearing in the real plant and the output channel where the randomly perturbed output is measurable. An iterative procedure based on the linear quadratic Gaussian optimal control model is developed for solving the optimal control problem of this stochastic system. The optimal state estimate provided by Kalman filtering theory and the optimal control law obtained from the linear quadratic regulator problem are then integrated into the dynamic integrated system optimisation and parameter estimation algorithm. When convergence is achieved, the iterative solutions of the optimal control problem for the model converge to the solution of the original optimal control problem of the discrete-time nonlinear system, despite model-reality differences. An illustrative example is solved using the method proposed. The results obtained show the effectiveness of the algorithm proposed.
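The Kalman filtering ingredient of the integrated algorithm can be sketched for a scalar linear-Gaussian model; this generic predict/update cycle is not the full integrated optimisation-and-estimation iteration, and all model coefficients below are illustrative assumptions.

```python
def kalman_step(x, P, u, z, a=1.0, b=1.0, h=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter for
    x[k+1] = a*x[k] + b*u[k] + w,  z[k] = h*x[k] + v,
    with process-noise variance q and measurement-noise variance r."""
    # Predict: propagate the state estimate and its error variance.
    x_pred = a * x + b * u
    P_pred = a * P * a + q
    # Update: blend the prediction with the measurement z.
    K = P_pred * h / (h * P_pred * h + r)   # Kalman gain
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1.0 - K * h) * P_pred
    return x_new, P_new
```

In the integrated algorithm described above, this state estimate feeds the LQR control law at every iteration while the parameter-estimation step absorbs the model-reality differences.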
NASA Astrophysics Data System (ADS)
Cottrill, Gerald C.
A hybrid numerical algorithm combining the Gauss Pseudospectral Method (GPM) with a Generalized Polynomial Chaos (gPC) method to solve nonlinear stochastic optimal control problems with constraint uncertainties is presented. The GPM and gPC have been shown to be spectrally accurate numerical methods for solving deterministic optimal control problems and stochastic differential equations, respectively. The gPC uses collocation nodes to sample the random space, which are then inserted into the differential equations and solved by applying standard differential equation methods. The resulting set of deterministic solutions is used to characterize the distribution of the solution by constructing a polynomial representation of the output as a function of uncertain parameters. Optimal control problems are especially challenging to solve since they often include path constraints, bounded controls, boundary conditions, and require solutions that minimize a cost functional. Adding random parameters can make these problems even more challenging. The hybrid algorithm presented in this dissertation is the first time the GPM and gPC algorithms have been combined to solve optimal control problems with random parameters. Using the GPM in the gPC construct provides minimum cost deterministic solutions used in stochastic computations that meet path, control, and boundary constraints, thus extending current gPC methods to be applicable to stochastic optimal control problems. The hybrid GPM-gPC algorithm was applied to two concept demonstration problems: a nonlinear optimal control problem with multiplicative uncertain elements and a trajectory optimization problem simulating an aircraft flying through a threat field where exact locations of the threats are unknown. The results show that the expected value, variance, and covariance statistics of the polynomial output function approximations of the state, control, cost, and terminal time variables agree with Monte-Carlo simulation
Gu, M G; Kong, F H
1998-06-23
We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte-Carlo method. Applying the theory on adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
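The stochastic-approximation principle the procedure rests on is the Robbins-Monro iteration, which drives an expectation to zero using only noisy function evaluations; the linear test function and step-size constant below are illustrative, and the MCMC machinery of the full procedure is not shown.

```python
import random

def robbins_monro(noisy_g, theta0, n_iter=20000, c=1.0, seed=0):
    """Robbins-Monro iteration: find theta with E[noisy_g(theta)] = 0,
    using step sizes c/n (which satisfy the usual divergence conditions)."""
    random.seed(seed)
    theta = theta0
    for n in range(1, n_iter + 1):
        theta -= (c / n) * noisy_g(theta)  # move against the noisy estimate
    return theta

# Solve E[theta - 2 + noise] = 0, i.e. theta* = 2, from noisy evaluations only.
g = lambda th: th - 2.0 + random.gauss(0.0, 1.0)
theta_hat = robbins_monro(g, theta0=0.0)
```

In the paper's setting the noisy evaluation comes from an MCMC estimate of the score function rather than a simple Gaussian perturbation, but the convergence argument has the same shape.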
Stochastic calculus for uncoupled continuous-time random walks
NASA Astrophysics Data System (ADS)
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.
2009-06-01
The continuous-time random walk (CTRW) is a pure-jump stochastic process with several applications not only in physics but also in insurance, finance, and economics. A definition is given for a class of stochastic integrals driven by a CTRW, which includes the Itō and Stratonovich cases. An uncoupled CTRW with zero-mean jumps is a martingale. It is proved that, as a consequence of the martingale transform theorem, if the CTRW is a martingale, the Itō integral is a martingale too. It is shown how the definition of the stochastic integrals can be used to easily compute them by Monte Carlo simulation. The relations between a CTRW, its quadratic variation, its Stratonovich integral, and its Itō integral are highlighted by numerical calculations when the jumps in space of the CTRW have a symmetric Lévy α-stable distribution and its waiting times have a one-parameter Mittag-Leffler distribution. Remarkably, these distributions have fat tails and an unbounded quadratic variation. In the diffusive limit of vanishing scale parameters, the probability density of this kind of CTRW satisfies the space-time fractional diffusion equation (FDE) or, more generally, the fractional Fokker-Planck equation, which generalizes the standard diffusion equation, solved by the probability density of the Wiener process, and thus provides a phenomenological model of anomalous diffusion. We also provide an analytic expression for the quadratic variation of the stochastic process described by the FDE and check it by Monte Carlo.
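A Monte Carlo sketch of the simplest uncoupled case: with exponential waiting times and zero-mean Gaussian jumps, the CTRW endpoint reduces to a compound Poisson sum (the paper's Mittag-Leffler waiting times and Lévy α-stable jumps need dedicated samplers not shown here).

```python
import numpy as np

def simulate_ctrw_endpoints(t_end, rate=1.0, jump_std=1.0, n_paths=4000, seed=0):
    """Sample X(t_end) for an uncoupled CTRW with exponential waiting times
    and zero-mean Gaussian jumps: the jump count by t_end is Poisson(rate*t_end),
    and the sum of n zero-mean Gaussian jumps is N(0, n * jump_std**2)."""
    rng = np.random.default_rng(seed)
    n_jumps = rng.poisson(rate * t_end, size=n_paths)
    return rng.normal(0.0, jump_std * np.sqrt(n_jumps))

x = simulate_ctrw_endpoints(t_end=10.0)
```

The zero-mean jumps make the walk a martingale, so the sample mean stays near zero while the variance grows like rate * t * jump_std², in line with the martingale property stated above.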
Random search optimization based on genetic algorithm and discriminant function
NASA Technical Reports Server (NTRS)
Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.
1990-01-01
The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.
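A minimal real-coded genetic algorithm of the kind used for the search phase might be sketched as follows; the operators (truncation selection, arithmetic crossover, Gaussian mutation) and all parameters are illustrative choices, and no discriminant-function constraint handling is included.

```python
import random

def genetic_search(fitness, bounds, pop_size=40, n_gens=80, sigma=0.1, seed=0):
    """Minimal real-coded genetic algorithm (minimization): truncation
    selection, arithmetic crossover, Gaussian mutation, box constraints."""
    rng = random.Random(seed)
    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [clip([rng.uniform(lo, hi) for lo, hi in bounds])
           for _ in range(pop_size)]
    for _ in range(n_gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]  # keep best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                     # arithmetic crossover weight
            child = [w * ai + (1 - w) * bi + rng.gauss(0.0, sigma)
                     for ai, bi in zip(a, b)]    # crossover + mutation
            children.append(clip(child))
        pop = elite + children
    return min(pop, key=fitness)

# Minimize the sphere function on [-5, 5]^2; optimum is the origin.
best = genetic_search(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```

Keeping the elite unchanged guarantees the best objective value never worsens, which is often what makes such a simple scheme "quick near optimum" in the sense described above.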
Random attractors for the stochastic coupled fractional Ginzburg-Landau equation with additive noise
Shu, Ji; Li, Ping; Zhang, Jia; Liao, Ou
2015-10-15
This paper is concerned with the stochastic coupled fractional Ginzburg-Landau equation with additive noise. We first transform the stochastic coupled fractional Ginzburg-Landau equation into random equations whose solutions generate a random dynamical system. Then we prove the existence of a random attractor for this random dynamical system.
Yang, Hua; Jiang, Feng
2014-01-01
This paper is concerned with the convergence of stochastic θ-methods for stochastic pantograph equations with Poisson-driven jumps of random magnitude. The strong order of convergence of the numerical method is given, and convergence of the method is proved. Some earlier results are generalized and improved. PMID:24672340
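For reference, the stochastic θ-method on the scalar linear test equation dX = λX dt + μX dW (without the pantograph delay or the jump terms treated in the paper) can be sketched as follows; parameter values are illustrative.

```python
import numpy as np

def stochastic_theta(lam, mu, x0, T, n_steps, theta=0.5, n_paths=5000, seed=0):
    """Stochastic theta-method for dX = lam*X dt + mu*X dW:
    X_{n+1} = X_n + (1-theta)*lam*X_n*h + theta*lam*X_{n+1}*h + mu*X_n*dW,
    solvable in closed form here because the drift is linear."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), size=n_paths)
        x = (x + (1 - theta) * lam * h * x + mu * x * dW) / (1 - theta * lam * h)
    return x

x_T = stochastic_theta(lam=-1.0, mu=0.5, x0=1.0, T=1.0, n_steps=100)
# E[X(T)] = x0 * exp(lam*T) = exp(-1) for this test equation.
```

Taking θ = 0 recovers Euler-Maruyama, while θ = 1 gives the drift-implicit scheme with better stability for stiff drift; the paper extends this family to delayed arguments and random-magnitude jumps.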
Stochastic optimization algorithm for inverse modeling of air pollution
NASA Astrophysics Data System (ADS)
Yeo, Kyongmin; Hwang, Youngdeok; Liu, Xiao; Kalagnanam, Jayant
2016-11-01
A stochastic optimization algorithm to estimate a smooth source function from a limited number of observations is proposed in the context of air pollution, where the source-receptor relation is given by an advection-diffusion equation. First, a smooth source function is approximated by a set of Gaussian kernels on a rectangular mesh system. Then, the generalized polynomial chaos (gPC) expansion is used to represent the model uncertainty due to the choice of the mesh system. It is shown that the convolution of gPC basis and the Gaussian kernel provides hierarchical basis functions for a spectral function estimation. The spectral inverse model is formulated as a stochastic optimization problem. We propose a regularization strategy based on the hierarchical nature of the basis polynomials. It is shown that the spectral inverse model is capable of providing a good estimate of the source function even when the number of unknown parameters (m) is much larger than the number of data points (n), m/n > 50.
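The m ≫ n setting can be illustrated with a plain Tikhonov (ridge) regularization over Gaussian-kernel weights, solved in its n × n dual form; this is a simplified stand-in for the paper's hierarchical gPC-based regularization, and all sizes, widths, and the point-source truth below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, m_kernels = 10, 500                 # far more unknowns than data
centers = np.linspace(0.0, 1.0, m_kernels)
x_obs = rng.uniform(0.0, 1.0, n_obs)

# Design matrix: observation i sees Gaussian kernel j (width 0.05).
A = np.exp(-((x_obs[:, None] - centers[None, :]) ** 2) / (2 * 0.05 ** 2))
w_true = np.zeros(m_kernels)
w_true[250] = 1.0                          # illustrative point source
y = A @ w_true

# Ridge solution in dual form: w = A^T (A A^T + alpha*I)^{-1} y,
# which only requires an n x n solve despite m = 500 unknowns.
alpha = 1e-6
w_hat = A.T @ np.linalg.solve(A @ A.T + alpha * np.eye(n_obs), y)
resid = np.linalg.norm(A @ w_hat - y)
```

The dual form makes the cost scale with the number of observations rather than the number of kernel weights, which is the practical reason heavily underdetermined formulations like m/n > 50 remain tractable.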
Stochastic error whitening algorithm for linear filter estimation with noisy data.
Rao, Yadunandana N; Erdogmus, Deniz; Rao, Geetha Y; Principe, Jose C
2003-01-01
Mean squared error (MSE) has been the most widely used tool to solve the linear filter estimation or system identification problem. However, MSE gives biased results when the input signals are noisy. This paper presents a novel stochastic gradient algorithm based on the recently proposed error whitening criterion (EWC) to tackle the problem of linear filter estimation in the presence of additive white disturbances. We will briefly motivate the theory behind the new criterion and derive an online stochastic gradient algorithm. A convergence proof of the stochastic gradient algorithm is derived under mild assumptions. Further, we will propose some extensions to the stochastic gradient algorithm to ensure faster, step-size independent convergence. We will perform extensive simulations and compare the results with MSE as well as total-least squares in a parameter estimation problem. The stochastic EWC algorithm has many potential applications. We will use this in designing robust inverse controllers with noisy data.
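For comparison, the baseline MSE stochastic gradient (LMS) filter that EWC is set against looks like this; the two-tap system and step size are illustrative, and the example is noiseless, which is exactly the regime where the MSE criterion is unbiased.

```python
import numpy as np

def lms(x, d, n_taps, step=0.01):
    """Least-mean-squares: stochastic gradient descent on the MSE
    J(w) = E[(d - w^T u)^2], updating one sample at a time."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1: k + 1][::-1]   # current input vector
        e = d[k] - w @ u                     # a-priori error
        w += step * e * u                    # stochastic gradient step
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
w_true = np.array([0.5, -0.3])
d = np.convolve(x, w_true)[: len(x)]   # noiseless desired signal
w_hat = lms(x, d, n_taps=2)
```

With additive noise on the input x, this same iteration converges to a biased weight vector, which is the failure mode the error whitening criterion above is designed to avoid.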
Modeling Langmuir isotherms with the Gillespie stochastic algorithm.
Epstein, J; Michael, J; Mandona, C; Marques, F; Dias-Cabral, A C; Thrash, M
2015-02-06
The overall goal of this work is to develop a robust modeling approach that is capable of simulating single and multicomponent isotherms for biological molecules interacting with a variety of adsorbents. Provided the ratio between the forward and reverse adsorption/desorption constants is known, the Gillespie stochastic algorithm has been shown to be effective in modeling isotherms consistent with the Langmuir theory and uptake curves that fall outside this traditional approach. We have used this method to model protein adsorption on ion-exchange adsorbents, hydrophobic interactive adsorbents and ice crystals. In our latest efforts we have applied the Gillespie approach to simulate binary and ternary isotherms from the literature involving gas-solid adsorption applications. In each case the model is consistent with the experimental results presented.
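A minimal Gillespie direct-method simulation of single-site adsorption/desorption shows the consistency with Langmuir theory: the long-run coverage approaches Kc/(1 + Kc) with K = k_ads/k_des. This is a generic sketch, not the authors' multicomponent model; the rate constants and site count are illustrative.

```python
import math
import random

def gillespie_langmuir(n_sites, k_ads, k_des, c, t_end, seed=0):
    """Gillespie direct method for A + S <-> A.S on n_sites sites.
    Adsorption propensity k_ads*c*(free sites); desorption k_des*(occupied).
    Returns the time-averaged fractional coverage."""
    rng = random.Random(seed)
    occupied, t, area = 0, 0.0, 0.0
    while t < t_end:
        a_ads = k_ads * c * (n_sites - occupied)
        a_des = k_des * occupied
        a_tot = a_ads + a_des
        dt = -math.log(1.0 - rng.random()) / a_tot   # exponential waiting time
        area += (occupied / n_sites) * min(dt, t_end - t)
        t += dt
        occupied += 1 if rng.random() * a_tot < a_ads else -1
    return area / t_end

theta = gillespie_langmuir(n_sites=200, k_ads=1.0, k_des=1.0, c=1.0, t_end=200.0)
# Langmuir prediction: theta = K*c / (1 + K*c) = 0.5 for K = c = 1.
```

Because the simulation only needs the ratio of forward and reverse rate constants to be fixed, the same loop can be driven with propensities that fall outside the Langmuir assumptions, which is how the approach extends to the non-Langmuir uptake curves mentioned above.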
Hierarchical Stochastic Simulation Algorithm for SBML Models of Genetic Circuits
Watanabe, Leandro H.; Myers, Chris J.
2014-01-01
This paper describes a hierarchical stochastic simulation algorithm, which has been implemented within iBioSim, a tool used to model, analyze, and visualize genetic circuits. Many biological analysis tools flatten out hierarchy before simulation, but there are many disadvantages associated with this approach. First, the memory required to represent the model can quickly expand in the process. Second, the flattening process is computationally expensive. Finally, when modeling a dynamic cellular population within iBioSim, inlining the hierarchy of the model is inefficient since models must grow dynamically over time. This paper discusses a new approach to handle hierarchy on the fly to make the tool faster and more memory-efficient. This approach yields significant performance improvements as compared to the former flat analysis method. PMID:25506588
Randomized Algorithms for Matrices and Data
NASA Astrophysics Data System (ADS)
Mahoney, Michael W.
2012-03-01
This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
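Computing the leverage scores exactly from a thin SVD is straightforward (the randomized algorithms reviewed here exist to approximate them faster for very large matrices); the random test matrix is illustrative.

```python
import numpy as np

def leverage_scores(A, k=None):
    """Statistical leverage scores of A relative to its rank-k dominant
    subspace: squared row norms of the first k left singular vectors,
    i.e. the diagonal of the projection onto that subspace."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = A.shape[1] if k is None else k
    return np.sum(U[:, :k] ** 2, axis=1)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
scores = leverage_scores(A)
```

The scores lie in [0, 1], sum to k, and flag the rows with the most influence on the dominant subspace, which is why they double as the sampling probabilities in the randomized LS and low-rank algorithms described above.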
Application of stochastic processes in random growth and evolutionary dynamics
NASA Astrophysics Data System (ADS)
Oikonomou, Panagiotis
We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically
Random migration processes between two stochastic epidemic centers.
Sazonov, Igor; Kelbert, Mark; Gravenor, Michael B
2016-04-01
We consider the epidemic dynamics in stochastic interacting population centers coupled by random migration. Both the epidemic and the migration processes are modeled by Markov chains. We derive explicit formulae for the probability distribution of the migration process, and explore the dependence of outbreak patterns on initial parameters, population sizes and coupling parameters, using analytical and numerical methods. We show the importance of considering the movement of resident and visitor individuals separately. The mean field approximation for a general migration process is derived and an approximate method that allows the computation of statistical moments for networks with highly populated centers is proposed and tested numerically.
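The migration component alone can be sketched as a continuous-time Markov chain simulated with the Gillespie method; detailed balance fixes the stationary split of the population between the two centers. The rates and population size below are illustrative, and no epidemic dynamics are included.

```python
import math
import random

def simulate_migration(n_individuals, rate_12, rate_21, t_end, seed=0):
    """Markov migration between two centers: each individual in center 1
    jumps to center 2 at rate rate_12, and back at rate rate_21.
    Returns the time-averaged fraction of the population in center 1."""
    rng = random.Random(seed)
    n1, t, area = n_individuals, 0.0, 0.0
    while t < t_end:
        a12 = rate_12 * n1                      # total 1 -> 2 jump rate
        a21 = rate_21 * (n_individuals - n1)    # total 2 -> 1 jump rate
        a_tot = a12 + a21
        dt = -math.log(1.0 - rng.random()) / a_tot
        area += (n1 / n_individuals) * min(dt, t_end - t)
        t += dt
        n1 += -1 if rng.random() * a_tot < a12 else 1
    return area / t_end

frac = simulate_migration(100, rate_12=1.0, rate_21=2.0, t_end=100.0)
# Detailed balance gives a stationary fraction rate_21/(rate_12+rate_21) = 2/3.
```

Tracking residents and visitors separately, as the abstract stresses, amounts to running one such chain per origin population rather than a single aggregate count.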
Implementing Quality Control on a Random Number Stream to Improve a Stochastic Weather Generator
USDA-ARS?s Scientific Manuscript database
For decades stochastic modelers have used computerized random number generators to produce random numeric sequences fitting a specified statistical distribution. Unfortunately, none of the random number generators we tested satisfactorily produced the target distribution. The result is generated d...
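A quality-control check of the kind described might be a chi-square goodness-of-fit test of the generated stream against its uniform target; the bin count and sample size below are illustrative.

```python
import random

def chi_square_uniform(stream, n_bins=10):
    """Chi-square statistic for testing a [0,1) sample stream against
    the uniform distribution with n_bins equiprobable bins."""
    counts = [0] * n_bins
    for u in stream:
        counts[min(int(u * n_bins), n_bins - 1)] += 1
    expected = len(stream) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(42)
stat = chi_square_uniform([rng.random() for _ in range(10000)])
# Under uniformity, stat follows a chi-square law with 9 degrees of freedom
# (mean 9), so very large values flag a defective generator.
```

Streams that repeatedly fail such a test can be rejected or reshuffled before being fed to the weather generator, which is the sense in which "quality control on a random number stream" is applied.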
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Brissette, François P.; Poulin, Annie; Côté, Pascal; Martel, Jean-Luc
2014-05-01
The process of hydrological model parameter calibration is routinely performed with the help of stochastic optimization algorithms. Many such algorithms have been created and they sometimes provide varying levels of performance (as measured by an efficiency metric such as Nash-Sutcliffe). This is because each algorithm is better suited for one type of optimization problem rather than another. This research project's aim was twofold. First, we sought to identify features of the calibration problem fitness landscapes that map the encountered problem types to the best possible optimization algorithm. Second, we investigated the optimal number of model evaluations needed to minimize resource usage while maximizing overall model quality. A total of five stochastic optimization algorithms (SCE-UA, CMAES, DDS, PSO and ASA) were used to calibrate four lumped hydrological models (GR4J, HSAMI, HMETS and MOHYSE) on 421 basins from the US MOPEX database. Each of these combinations was performed using three objective functions (Log(RMSE), NSE, and a metric combining NSE, RMSE and BIAS) to add sufficient diversity to the fitness landscapes. Each run was performed 30 times for statistical analysis. For every parameter set tested during calibration, a validation value was computed on a separate period. It was then possible to outline the calibration skill versus the validation skill for the different algorithms. Fitness landscapes were characterized by various metrics, such as the dispersion metric, the mean distance between random points and their respective local minima (found through simple hill-climbing algorithms) and the mean distance between the local minima and the best local optimum found. These metrics were then compared to the calibration score of the various optimization algorithms. Preliminary results tend to show that fitness landscapes presenting a globally convergent structure are more prevalent than other types of landscapes in this
Efficient stochastic Galerkin methods for random diffusion equations
Xiu, Dongbin; Shen, Jie
2009-02-01
We discuss in this paper efficient solvers for stochastic diffusion equations in random media. We employ generalized polynomial chaos (gPC) expansion to express the solution in a convergent series and obtain a set of deterministic equations for the expansion coefficients by Galerkin projection. Although the resulting system of diffusion equations is coupled, we show that one can construct fast numerical methods to solve them in a decoupled fashion. The methods are based on separation of the diagonal terms and off-diagonal terms in the matrix of the Galerkin system. We examine properties of this matrix and show that the proposed method is unconditionally stable for unsteady problems and convergent for steady problems, with a convergence rate independent of discretization parameters. Numerical examples are provided, for both steady and unsteady random diffusions, to support the analysis.
A stochastic maximum principle for backward control systems with random default time
NASA Astrophysics Data System (ADS)
Shen, Yang; Kuen Siu, Tak
2013-05-01
This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.
Atomic clock prediction algorithm: random pursuit strategy
NASA Astrophysics Data System (ADS)
Wang, Yuzhuo; Chen, Yu; Gao, Yuan; Xu, Qinghua; Zhang, Aimin
2017-06-01
The present study proposes a novel prediction algorithm named 'random pursuit strategy'. It contains a predictor ensemble consisting of several predictors, each operating in a subspace of the original sample data space. The prediction is calculated by combining the outputs of the individual predictors using a weighted average. The frequency data of cesium clocks and hydrogen masers were predicted using the Kalman filter predictor and the random pursuit strategy. The proposed algorithm demonstrates superior capability in some cases, which could benefit system control applications.
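The ensemble idea above, predictors over random subspaces of the sample history combined by a weighted average, can be sketched as follows. The per-subspace mean predictor and the drift data are hypothetical simplifications, not the paper's Kalman-based scheme.

```python
import random

def make_subspaces(n_pred=5, window=8, rng=random):
    """Each predictor sees a random subset of the last `window` samples."""
    return [sorted(rng.sample(range(window), rng.randint(2, window)))
            for _ in range(n_pred)]

def ensemble_predict(series, subspaces, weights=None):
    """Weighted average of per-subspace mean predictors over the last 8 samples."""
    history = series[-8:]
    outputs = [sum(history[i] for i in idx) / len(idx) for idx in subspaces]
    weights = weights or [1.0 / len(outputs)] * len(outputs)
    return sum(w * o for w, o in zip(weights, outputs))

random.seed(0)
clock_freq = [10.0 + 0.01 * k for k in range(40)]  # hypothetical frequency drift
subspaces = make_subspaces()
forecast = ensemble_predict(clock_freq, subspaces)
```

In practice the weights would be adapted from each predictor's recent error rather than kept uniform.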
NASA Astrophysics Data System (ADS)
Finney, Greg A.; Persons, Christopher M.; Henning, Stephan; Hazen, Jessie; Whitley, Daniel
2014-06-01
IERUS Technologies, Inc. and the University of Alabama in Huntsville have partnered to perform characterization and development of algorithms and hardware for adaptive optics. To date the algorithm work has focused on implementation of the stochastic parallel gradient descent (SPGD) algorithm. SPGD is a metric-based approach in which a scalar metric is optimized by taking random perturbative steps for many actuators simultaneously. This approach scales to systems with a large number of actuators while maintaining bandwidth, whereas conventional methods are negatively impacted by the very large matrix multiplications that are required. The metric approach enables the use of higher speed sensors with fewer (or even a single) sensing element(s), enabling a higher control bandwidth. Furthermore, the SPGD algorithm is model-free, and thus is not strongly impacted by the presence of nonlinearities which degrade the performance of conventional phase reconstruction methods. Finally, for high energy laser applications, SPGD can be performed using the primary laser beam without the need for an additional beacon laser. The conventional SPGD algorithm was modified to use an adaptive gain to improve convergence while maintaining low steady state error. Results from laboratory experiments using phase plates as atmosphere surrogates will be presented, demonstrating areas in which the adaptive gain yields better performance and areas that require further investigation.
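The conventional (fixed-gain, two-sided) SPGD update the abstract starts from can be sketched in a few lines: perturb all actuator commands at once, measure the change in the scalar metric, and step every command along the product of metric change and perturbation. The quadratic "aberration" metric is a hypothetical stand-in for a real wavefront-quality measurement.

```python
import random

def spgd(metric, n_act=16, iters=300, gain=0.5, sigma=0.05, rng=random):
    """Two-sided SPGD: apply random +/- sigma perturbations to all actuators
    simultaneously, measure dJ, and step each actuator by gain * dJ * du."""
    u = [0.0] * n_act
    for _ in range(iters):
        du = [sigma if rng.random() < 0.5 else -sigma for _ in range(n_act)]
        dj = (metric([ui + di for ui, di in zip(u, du)])
              - metric([ui - di for ui, di in zip(u, du)]))
        u = [ui + gain * dj * di for ui, di in zip(u, du)]  # ascend the metric
    return u

# Hypothetical metric: largest when the commands cancel a fixed aberration
aberration = [0.3, -0.2, 0.1, 0.05] * 4
metric = lambda u: -sum((ui + ai) ** 2 for ui, ai in zip(u, aberration))

random.seed(2)
u_final = spgd(metric)
```

The adaptive-gain variant studied in the paper would replace the constant `gain` with one scheduled from the recent metric history.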
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; Watson, Jean -Paul; Wets, Roger J.-B.; Woodruff, David L.
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. Finally, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
A genetic algorithm for the arrival probability in the stochastic networks.
Shirdel, Gholam H; Abdolhosseinzadeh, Mohsen
2016-01-01
A genetic algorithm is presented to find the arrival probability in a directed acyclic network with stochastic parameters, which improves the reliability of transmission flow in delay-sensitive networks. Some sub-networks are extracted from the original network, and a connection is established between the original source node and the original destination node by randomly selecting local source and local destination nodes. The connections are sorted according to their arrival probabilities and the best established connection, the one with the maximum arrival probability, is determined. A discrete-time Markov chain is established on the network. The arrival probability from a given source node to a given destination node is defined as the multi-step transition probability of absorption in the final state of the established Markov chain. The proposed method is applicable to large stochastic networks, where the previous methods were not. The effectiveness of the proposed method is illustrated by some numerical results with perfect fitness values of the proposed genetic algorithm.
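The absorption-probability definition above has a direct numerical reading: put all probability mass on the source state, iterate the transition matrix, and read off the mass in the absorbing destination state. The 4-state chain below is a hypothetical illustration, not the paper's network.

```python
def arrival_probability(P, source, dest, steps=200):
    """Multi-step transition probability of absorption in `dest` for a
    discrete-time Markov chain with row-stochastic matrix P (dest absorbing)."""
    n = len(P)
    dist = [1.0 if i == source else 0.0 for i in range(n)]
    for _ in range(steps):
        # one step of the chain: dist <- dist * P
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist[dest]

# Toy 4-state network: 0 = source, 1 = relay, 2 = destination, 3 = packet loss
P = [[0.0, 0.6, 0.3, 0.1],
     [0.2, 0.0, 0.5, 0.3],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
p_arrive = arrival_probability(P, 0, 2)
```

For this chain the exact absorption probability solves p0 = 0.3 + 0.6 p1, p1 = 0.5 + 0.2 p0, giving p0 = 0.6/0.88; the iteration converges to it geometrically.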
Non-divergence of stochastic discrete time algorithms for PCA neural networks.
Lv, Jian Cheng; Yi, Zhang; Li, Yunxia
2015-02-01
Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms must not diverge, but their convergence properties are difficult to study directly because they are described by stochastic discrete time (SDT) algorithms. This brief analyzes the original SDT algorithms directly, and derives invariant sets that guarantee the nondivergence of these algorithms in a stochastic environment when proper learning parameters are selected. Our theoretical results are verified by a series of simulation examples.
Selecting materialized views using random algorithm
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi
2007-04-01
The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries, and query response time is the first issue for the user to consider. In this paper, we develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic process develops, legal solutions become increasingly difficult to produce, so many solutions are eliminated and the time needed to generate them grows. Therefore, an improved algorithm is presented in this paper, combining simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments are conducted to test the function and efficiency of our algorithms. The experiments show that the given methods provide near-optimal solutions in limited time and work well in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
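The simulated-annealing half of the hybrid scheme above can be sketched generically: accept any improvement, accept a worse candidate with probability exp(-delta/T), and cool T geometrically. The view-selection cost below (maintenance cost plus a penalty when response time exceeds a bound) is a hypothetical stand-in for the paper's cost model.

```python
import math
import random

def anneal(cost, neighbor, x0, t0=10.0, alpha=0.97, iters=600, rng=random):
    """Plain simulated annealing: accept improvements always, worse moves
    with probability exp(-delta / T), and cool T geometrically."""
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        t *= alpha
    return best, fbest

# Hypothetical view-selection instance: one bit per candidate view
maintenance = [4, 7, 2, 9, 3, 6]   # maintenance cost of each view
speedup = [5, 8, 3, 9, 4, 6]       # response-time reduction from each view

def cost(x):
    response = 40 - sum(s for s, bit in zip(speedup, x) if bit)
    return sum(m for m, bit in zip(maintenance, x) if bit) + 10 * max(0, response - 20)

def neighbor(x, rng):
    i = rng.randrange(len(x))       # flip one view in or out
    return x[:i] + (1 - x[i],) + x[i + 1:]

random.seed(3)
best, fbest = anneal(cost, neighbor, (0,) * 6)
```

The paper's hybrid would use annealing-style acceptance inside the genetic loop rather than a standalone chain as here.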
2004-09-01
optimization problems with stochastic objective functions and a mixture of design variable types. The generalized pattern search (GPS) class of algorithms is ... provide computational enhancements to the basic algorithm. Implementation alternatives include the use of modern R&S procedures designed to provide ...
Stochastic optimization with randomized smoothing for image registration.
Sun, Wei; Poot, Dirk H J; Smal, Ihor; Yang, Xuan; Niessen, Wiro J; Klein, Stefan
2017-01-01
Image registration is typically formulated as an optimization process, which aims to find the optimal transformation parameters of a given transformation model by minimizing a cost function. Local minima may exist in the optimization landscape, which could hamper the optimization process. To eliminate local minima, smoothing the cost function would be desirable. In this paper, we investigate the use of a randomized smoothing (RS) technique for stochastic gradient descent (SGD) optimization, to effectively smooth the cost function. In this approach, Gaussian noise is added to the transformation parameters prior to computing the cost function gradient in each iteration of the SGD optimizer. The approach is suitable for both rigid and nonrigid registrations. Experiments on synthetic images, cell images, public CT lung data, and public MR brain data demonstrate the effectiveness of the novel RS technique in terms of registration accuracy and robustness.
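The randomized-smoothing idea above is small enough to sketch in one dimension: add Gaussian noise to the parameter before each gradient evaluation, so the iterates descend (on average) the Gaussian-smoothed cost and can slide past ripple-induced local minima. The rippled 1-D cost is a hypothetical stand-in for a real registration cost function.

```python
import math
import random

def rs_sgd(grad, theta, sigma=0.5, lr=0.05, iters=600, rng=random):
    """SGD with randomized smoothing: perturb the parameter with Gaussian
    noise before each gradient call; return the average of the tail iterates."""
    tail = []
    for k in range(iters):
        theta -= lr * grad(theta + rng.gauss(0.0, sigma))
        if k >= iters // 2:
            tail.append(theta)
    return sum(tail) / len(tail)

# Toy registration-like cost with local minima:
# f(t) = (t - 3)^2 + 2 cos(5 t), so f'(t) = 2 (t - 3) - 10 sin(5 t)
grad = lambda t: 2.0 * (t - 3.0) - 10.0 * math.sin(5.0 * t)

random.seed(4)
t_hat = rs_sgd(grad, 0.0)
```

Smoothing with sigma = 0.5 damps the ripple term by roughly exp(-25 sigma^2 / 2), so the smoothed cost is nearly the bare quadratic and the iterates settle near its minimum at t = 3.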
Convergence rates of finite difference stochastic approximation algorithms part I: general sampling
NASA Astrophysics Data System (ADS)
Dai, Liyi
2016-05-01
Stochastic optimization is a fundamental problem that finds applications in many areas including biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information of the sample objective function that is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, under various updating schemes using finite differences as gradient approximations. The analysis is carried out under a general framework covering a wide range of updating scenarios. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences.
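The Kiefer-Wolfowitz recursion the abstract analyzes is short enough to sketch: descend a noisy objective using a central finite difference of two noisy evaluations, with decaying gain and difference-width sequences. The gain choices and the noisy quadratic below are illustrative assumptions, not the paper's schemes.

```python
import random

def kiefer_wolfowitz(sample_f, x, a=1.0, c=1.0, n_iter=2000, rng=random):
    """Kiefer-Wolfowitz iteration with gains a_n = a/n and finite-difference
    widths c_n = c / n^(1/4): x <- x - a_n * (F(x + c_n) - F(x - c_n)) / (2 c_n)."""
    for n in range(1, n_iter + 1):
        cn = c / n ** 0.25
        g = (sample_f(x + cn, rng) - sample_f(x - cn, rng)) / (2.0 * cn)
        x -= (a / n) * g
    return x

# Hypothetical noisy sample objective: quadratic with minimum at x = 2
sample_f = lambda x, rng: (x - 2.0) ** 2 + rng.gauss(0.0, 0.5)

random.seed(5)
x_hat = kiefer_wolfowitz(sample_f, 0.0)
```

Generating the two evaluations from common random numbers, one way of "controlling the implementation of the finite differences", would cancel much of the noise in the difference and is what accelerates convergence.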
NASA Technical Reports Server (NTRS)
Miamee, A. G.
1988-01-01
It is shown that the algorithms for determining the generating function and prediction error matrix of multivariate stationary stochastic processes developed by Wiener and Masani (1957), and later by Masani (1960), work in a more general setting.
NASA Astrophysics Data System (ADS)
Staber, Brian; Guilleminot, Johann
2017-06-01
In this Note, we present a unified approach to the information-theoretic modeling and simulation of a class of elasticity random fields, for all physical symmetry classes. The new stochastic representation builds upon a Walpole tensor decomposition, which allows the maximum entropy constraints to be decoupled in accordance with the tensor (sub)algebras associated with the class under consideration. In contrast to previous works where the construction was carried out on the scalar-valued Walpole coordinates, the proposed strategy involves both matrix-valued and scalar-valued random fields. This enables, in particular, the construction of a generation algorithm based on a memoryless transformation, hence improving the computational efficiency of the framework. Two applications involving weak symmetries and sampling over spherical and cylindrical geometries are subsequently provided. These numerical experiments are relevant to the modeling of elastic interphases in nanocomposites, as well as to the simulation of spatially dependent wood properties for instance.
Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2017-01-01
Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with probability proportional to the reaction propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides a favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against state-of-the-art algorithms in the literature to demonstrate its applicability and efficiency.
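The baseline that composition-rejection schemes accelerate is the classic Gillespie direct method: draw the waiting time from an exponential with rate equal to the total propensity, then pick the firing reaction with probability proportional to its propensity. A minimal sketch on a hypothetical birth-death network:

```python
import random

def ssa_direct(x, reactions, t_end, rng=random):
    """Gillespie direct method for a single-species state x.
    `reactions` is a list of (propensity_fn, update_fn) pairs."""
    t = 0.0
    while True:
        propensities = [rate(x) for rate, _ in reactions]
        a0 = sum(propensities)
        if a0 == 0.0:
            return x                      # no reaction can fire
        t += rng.expovariate(a0)          # exponential waiting time
        if t >= t_end:
            return x
        r = rng.random() * a0             # linear search over propensities
        for p, (_, update) in zip(propensities, reactions):
            r -= p
            if r <= 0.0:
                x = update(x)
                break

# Toy birth-death network: 0 -> A at rate 5, A -> 0 at rate 0.5 per molecule
reactions = [(lambda x: 5.0, lambda x: x + 1),
             (lambda x: 0.5 * x, lambda x: x - 1)]
random.seed(6)
a_final = ssa_direct(0, reactions, t_end=50.0)
```

The linear search in the selection step is what costs O(number of reactions) per firing; the paper's composition-rejection selection replaces it with a cost independent of the reaction count.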
NASA Astrophysics Data System (ADS)
Kiesewetter, Simon; Drummond, Peter D.
2017-03-01
A variance reduction method for stochastic integration of Fokker-Planck equations is derived. This unifies the cumulant hierarchy and stochastic equation approaches to obtaining moments, giving a performance superior to either. We show that the brute force method of reducing sampling error by just using more trajectories in a sampled stochastic equation is not the best approach. The alternative of using a hierarchy of moment equations is also not optimal, as it may converge to erroneous answers. Instead, through Bayesian conditioning of the stochastic noise on the requirement that moment equations are satisfied, we obtain improved results with reduced sampling errors for a given number of stochastic trajectories. The method used here converges faster in time-step than Ito-Euler algorithms. This parallel optimized sampling (POS) algorithm is illustrated by several examples, including a bistable nonlinear oscillator case where moment hierarchies fail to converge.
NASA Astrophysics Data System (ADS)
Miller, J. A.; Piscicelli, M.
2005-12-01
The momentum diffusion or Fokker-Planck operator describes, at least approximately, the evolution of a distribution of particles interacting with a collection of scattering centers. The interactions can range from Coulomb collisions with particles of the same or another species, to resonant interactions with linear plasma waves, to nonresonant collisions with randomly-moving large-scale (compared to the particle gyroradius) magnetic inhomogeneities. Consequently, this operator is a common feature in descriptions of particle transport and stochastic acceleration by electromagnetic turbulence in a wide variety of astrophysical and space plasma situations. An analytical solution of a kinetic equation involving this operator is intractable in practical instances, and hence numerical solutions must be employed. We demonstrate how to transform the kinetic equation into an equivalent system of Stratonovich Stochastic Differential Equations, and present a high-order adaptive Runge-Kutta algorithm for their solution. This technique can provide accurate solutions of a kinetic equation over long timescales, and is easily adapted to take into account nonstochastic processes. This work was supported by NASA grant NAG5-12794
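The transformation described above, solving a Fokker-Planck-type kinetic equation by integrating equivalent stochastic differential equations, can be sketched with the simplest member of that family, the first-order Euler-Maruyama step (the abstract's high-order adaptive Stratonovich Runge-Kutta scheme refines exactly this idea). The Ornstein-Uhlenbeck test case is a hypothetical stand-in for a momentum-diffusion operator.

```python
import random

def euler_maruyama(drift, diffusion, p0, dt, n_steps, rng=random):
    """First-order Ito integration of dp = A(p) dt + B(p) dW for one particle;
    the evolving distribution is sampled by running many such trajectories."""
    p = p0
    for _ in range(n_steps):
        p += drift(p) * dt + diffusion(p) * rng.gauss(0.0, dt ** 0.5)
    return p

# Ornstein-Uhlenbeck test case: relaxation toward p = 0, constant diffusion
drift = lambda p: -p
diffusion = lambda p: 0.5

random.seed(7)
samples = [euler_maruyama(drift, diffusion, 5.0, 0.01, 2000) for _ in range(200)]
mean = sum(samples) / len(samples)
```

After t = 20 relaxation times the ensemble mean has decayed from 5 to essentially zero, and the sample spread approaches the stationary variance B^2/2.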
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by the Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then retrieved by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
The stochastic link equilibrium strategy and algorithm for flow assignment in communication networks
NASA Astrophysics Data System (ADS)
Tao, Yang; Zhou, Xia
2005-11-01
Based on the mature user equilibrium (UE) theory from the transportation field, and on the similarity of network flow between transportation and communication, this paper applies user equilibrium theory to communication networks and further studies how to apply stochastic user equilibrium (SUE) to flow assignment in generalized communication networks. The stochastic link equilibrium (SLE) flow assignment strategy is proposed, and an algorithm for SLE flow assignment is provided. Both analysis and simulation based on the given algorithm show that the optimal flow assignment in networks can be achieved by using this algorithm.
McDonnell, Mark D.; Mohan, Ashutosh; Stricker, Christian
2013-01-01
The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential (AP) at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of AP arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics frequently are based only on the mean number of vesicles released by each pre-synaptic AP, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms. PMID:23675343
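A per-site stochastic model of the kind described above is easy to simulate directly: each release site is either available or empty, an available site releases with some probability on each action potential, and an empty site refills with some probability between action potentials. All parameter values here are hypothetical, and the model is a minimal sketch rather than any specific model from the paper.

```python
import random

def release_train(n_sites=20, p_release=0.4, p_recover=0.2, n_aps=30, rng=random):
    """Simulate vesicle release site-by-site across a train of action
    potentials, capturing the trial-to-trial (binomial) variability that
    mean-only models discard. Returns the release count per AP."""
    available = [True] * n_sites
    counts = []
    for _ in range(n_aps):
        released = 0
        for i in range(n_sites):          # release on AP arrival
            if available[i] and rng.random() < p_release:
                available[i] = False
                released += 1
        counts.append(released)
        for i in range(n_sites):          # recovery before the next AP
            if not available[i] and rng.random() < p_recover:
                available[i] = True
    return counts

random.seed(8)
counts = release_train()
mean_release = sum(counts) / len(counts)
```

Because depleted sites refill slowly, the release counts depress from roughly n_sites * p_release on the first AP toward a lower steady state, which is the history dependence the abstract describes.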
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for performance evaluation of over-wireless-network estimation techniques under realistic radio channel conditions.
D-leaping: Accelerating stochastic simulation algorithms for reactions with delays
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2009-09-01
We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Li, Xinyang
2011-04-01
Optimizing the system performance metric directly is an important method for correcting wavefront aberrations in an adaptive optics (AO) system where wavefront sensing methods are unavailable or ineffective. An appropriate deformable mirror control algorithm is the key to successful wavefront correction. Based on several stochastic parallel optimization control algorithms, an adaptive optics system with a 61-element deformable mirror (DM) is simulated. Genetic Algorithm (GA), Stochastic Parallel Gradient Descent (SPGD), Simulated Annealing (SA) and the Algorithm Of Pattern Extraction (Alopex) are compared in convergence speed and correction capability. The results show that all these algorithms have the ability to correct for atmospheric turbulence. Compared with least squares fitting, they almost obtain the best correction achievable for the 61-element DM. SA is the fastest and GA is the slowest of these algorithms. The number of perturbations required by GA is almost 20 times that of SA, 15 times that of SPGD, and 9 times that of Alopex.
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev
2017-01-01
In the present paper, we study the underlying mechanisms of stochastic excitability in glycolysis, using the model proposed by Sel'kov as an example. A stochastic variant of this model with randomly forced influx of the substrate is considered. Our analysis is based on the stochastic sensitivity function technique. A detailed parametric analysis of the stochastic sensitivity of attractors is carried out. A range of parameters where the stochastic model is highly sensitive to noise is determined, and a supersensitive Canard cycle is found. Phenomena of stochastic excitability and variability of forced equilibria and cycles are demonstrated and studied. It is shown that in the zone of Canard cycles, noise-induced chaos is observed.
Evaluation of a Geothermal Prospect Using a Stochastic Joint Inversion Algorithm
NASA Astrophysics Data System (ADS)
Tompson, A. F.; Mellors, R. J.; Ramirez, A.; Dyer, K.; Yang, X.; Trainor-Guitton, W.; Wagoner, J. L.
2013-12-01
A stochastic joint inverse algorithm to analyze diverse geophysical and hydrologic data for a geothermal prospect is developed. The purpose is to improve prospect evaluation by finding an ensemble of hydrothermal flow models that are most consistent with multiple types of data sets. The staged approach combines Bayesian inference within a Markov Chain Monte Carlo (MCMC) global search algorithm. The method is highly flexible and capable of accommodating multiple and diverse datasets as a means to maximize the utility of all available data to understand system behavior. An initial application is made at a geothermal prospect located near Superstition Mountain in the western Salton Trough in California. Readily available data include three thermal gradient exploration boreholes, borehole resistivity logs, magnetotelluric and gravity geophysical surveys, surface heat flux measurements, and other nearby hydrologic and geologic information. Initial estimates of uncertainty in structural or parametric characteristics of the prospect are used to drive large numbers of simulations of hydrothermal fluid flow and related geophysical processes using random realizations of the conceptual geothermal system. Uncertainty in the results is represented within a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the perceived (prior) uncertainties. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-641792.
A Simple Genetic Algorithm for Calibration of Stochastic Rock Discontinuity Networks
NASA Astrophysics Data System (ADS)
Jimenez, R.; Jurado-Piña, R.
2012-07-01
We present a novel approach for calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of application of the method to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to GA operations of selection, crossover and mutation. Such back-calculated parameters are employed to make assessments about the inference capabilities of the model using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs significantly depend on the type of objective function considered; and they also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases they can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
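The GA machinery described above, binary chromosomes, selection, crossover and mutation, can be sketched minimally. The decoding range, population settings, and the one-parameter "calibration" objective (recovering a hypothetical discontinuity intensity) are illustrative assumptions, not the paper's setup.

```python
import random

def decode(bits, lo, hi):
    """Map a binary chromosome to a real parameter in [lo, hi]."""
    v = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * v / (2 ** len(bits) - 1)

def ga_calibrate(objective, n_bits=12, pop=30, gens=60, pc=0.8, pm=0.02, rng=random):
    """Minimal binary GA: 3-way tournament selection, one-point crossover
    with probability pc, per-bit mutation with probability pm, and elitism."""
    P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        nxt = sorted(P, key=objective)[:2]            # keep the two best
        while len(nxt) < pop:
            a, b = (min(rng.sample(P, 3), key=objective) for _ in (0, 1))
            if rng.random() < pc:
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [[1 - g if rng.random() < pm else g for g in c] for c in (a, b)]
        P = nxt[:pop]
    return min(P, key=objective)

# Hypothetical calibration target: recover a discontinuity intensity of 2.5
target = 2.5
objective = lambda bits: abs(decode(bits, 0.0, 10.0) - target)

random.seed(9)
best = ga_calibrate(objective)
est = decode(best, 0.0, 10.0)
```

In the paper's setting the objective would instead compare statistics of the simulated Poisson discontinuity network against field data, which is what makes the choice of objective function so influential.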
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Gray, Alexander
1999-01-01
We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.
Modeling Signal Transduction Networks: A comparison of two Stochastic Kinetic Simulation Algorithms
Pettigrew, Michel F.; Resat, Haluk
2005-09-15
Simulations of a scalable four-compartment reaction model based on the well known epidermal growth factor receptor (EGFR) signal transduction system are used to compare two stochastic algorithms, StochSim and the Gibson-Gillespie. It is concluded that the Gibson-Gillespie is the algorithm of choice for most realistic cases, with the possible exception of signal transduction networks characterized by a moderate number (< 100) of complex types, each with a very small population, but with a high degree of connectivity amongst the complex types. Keywords: Signal transduction networks, Stochastic simulation, StochSim, Gillespie
Fluorescence microscopy image noise reduction using a stochastically-connected random field model
Haider, S. A.; Cameron, A.; Siva, P.; Lui, D.; Shafiee, M. J.; Boroomand, A.; Haider, N.; Wong, A.
2016-01-01
Fluorescence microscopy is an essential part of a biologist's toolkit, allowing the assaying of many parameters, such as subcellular localization of proteins, changes in cytoskeletal dynamics, protein-protein interactions, and the concentration of specific cellular ions. A fundamental challenge with using fluorescence microscopy is the presence of noise. This study introduces a novel approach to reducing noise in fluorescence microscopy images. The noise reduction problem is posed as a Maximum A Posteriori estimation problem and solved using a novel random field model called the stochastically-connected random field (SRF), which combines random graph and field theory. Experimental results using synthetic and real fluorescence microscopy data show that the proposed approach achieves strong noise reduction performance compared to several other noise reduction algorithms, as measured by quantitative metrics. The proposed SRF approach achieved strong performance in terms of signal-to-noise ratio on the synthetic data, high signal-to-noise and contrast-to-noise ratios on the real fluorescence microscopy data, and was able to maintain cell structure and subtle details while reducing background and intra-cellular noise. PMID:26884148
A parallel algorithm for random searches
NASA Astrophysics Data System (ADS)
Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.
2015-11-01
We discuss a parallelization procedure for a two-dimensional random search by a single individual, a typically sequential process. To preserve the features of the sequential random search in the parallel version, we analyze the spatial patterns of encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency in the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution to each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach achieves a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.
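The parallelization device described above, fitting a lognormal to the observed inter-target distances and letting each parallel walker sample from it, can be sketched as follows. The moment-matching helper and all numbers are illustrative, not taken from the paper:

```python
import math
import random

def lognormal_params(mean, std):
    """Convert the sample mean/std of the observed target-spacing
    distribution into the (mu, sigma) parameters of a lognormal."""
    sigma2 = math.log(1.0 + (std / mean) ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

def parallel_spacings(mean, std, n_walkers, n_targets, seed=4):
    """Each parallel walker draws its inter-target distances directly from
    the fitted lognormal instead of re-running the sequential search."""
    mu, sigma = lognormal_params(mean, std)
    rng = random.Random(seed)
    return [[rng.lognormvariate(mu, sigma) for _ in range(n_targets)]
            for _ in range(n_walkers)]

walks = parallel_spacings(mean=10.0, std=4.0, n_walkers=8, n_targets=5000)
sample = [d for w in walks for d in w]
est_mean = sum(sample) / len(sample)
print(round(est_mean, 1))  # close to the prescribed mean of 10
```

Because each walker samples independently from the same fitted distribution, the walkers can run on separate processes with no communication, which is where the reported speedup comes from.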
Liu, Qunfeng; Chen, Wei-Neng; Deng, Jeremiah D; Gu, Tianlong; Zhang, Huaxiang; Yu, Zhengtao; Zhang, Jun
2017-02-07
The popular performance profiles and data profiles for benchmarking deterministic optimization algorithms are extended to benchmark stochastic algorithms for global optimization problems. A general confidence interval is employed to replace the significance test, which is popular in traditional benchmarking methods but has drawn increasing criticism. By computing confidence bounds of the general confidence interval and visualizing them with performance profiles and/or data profiles, our benchmarking method can be used to compare stochastic optimization algorithms graphically. Compared with traditional benchmarking methods, our method is statistically synthetic and therefore suitable for large sets of benchmark problems. Compared with some sample-mean-based benchmarking methods, e.g., the method adopted in the black-box optimization benchmarking workshop/competition, our method considers not only sample means but also sample variances. The most important property of our method is that it is distribution-free, i.e., it does not depend on any distributional assumption about the population. This makes it a promising benchmarking method for stochastic optimization algorithms. Some examples are provided to illustrate how to use our method to compare stochastic optimization algorithms.
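For readers unfamiliar with the deterministic baseline being extended, a sketch of plain performance profiles (without the paper's confidence bounds, which are its actual contribution) might look like:

```python
def performance_profile(times):
    """Build performance-profile curves rho_s(tau) from a cost matrix:
    times[p][s] = cost of solver s on problem p (float('inf') = failure).
    Returns rho(s, tau) = fraction of problems on which solver s is
    within a factor tau of the best solver's cost."""
    ratios = []
    for row in times:
        best = min(row)
        ratios.append([t / best for t in row])   # performance ratios per problem
    n_prob = len(ratios)
    def rho(s, tau):
        return sum(1 for r in ratios if r[s] <= tau) / n_prob
    return rho

# Two solvers on three problems; solver 0 wins twice but fails once.
costs = [[1.0, 2.0], [3.0, 4.5], [float('inf'), 2.0]]
rho = performance_profile(costs)
print(rho(0, 1.0), rho(1, 1.0))  # fractions of outright wins
print(rho(1, 2.0))               # solver 1 is within 2x best everywhere
```

For stochastic solvers, each entry of `costs` would itself be a sample of runs; the paper's method replaces the single cost with a confidence bound before the profile is drawn.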
NASA Astrophysics Data System (ADS)
Yang, Yongge; Xu, Wei; Sun, Yahui; Xiao, Yanwen
2017-01-01
This paper investigates stochastic bifurcations in a nonlinear vibroimpact system with fractional derivative under random excitation. First, the original stochastic vibroimpact system with fractional derivative is transformed into an equivalent stochastic vibroimpact system without fractional derivative. Then, the non-smooth transformation and the stochastic averaging method are used to obtain analytical solutions of the equivalent stochastic system. Finally, to verify the effectiveness of the approach, the van der Pol vibroimpact system with fractional derivative is worked out in detail. Very satisfactory agreement is found between the analytical and numerical results. An interesting phenomenon found in this paper is that the fractional order and the fractional coefficient of the stochastic van der Pol vibroimpact system can induce stochastic P-bifurcation. To the best of the authors' knowledge, stochastic P-bifurcation phenomena induced by fractional order and fractional coefficient have not previously been reported in the literature on the dynamical behavior of stochastic systems with fractional derivative under Gaussian white noise excitation.
Some Randomized Algorithms for Convex Quadratic Programming
Goldbach, R.
1999-01-15
We adapt some randomized algorithms of Clarkson [3] for linear programming to the framework of so-called LP-type problems, which was introduced by Sharir and Welzl [10]. This framework is quite general and allows a unified and elegant presentation and analysis. We also show that LP-type problems include minimization of a convex quadratic function subject to convex quadratic constraints as a special case, for which the algorithms can be implemented efficiently, if only linear constraints are present. We show that the expected running times depend only linearly on the number of constraints, and illustrate this by some numerical results. Even though the framework of LP-type problems may appear rather abstract at first, application of the methods considered in this paper to a given problem of that type is easy and efficient. Moreover, our proofs are in fact rather simple, since many technical details of more explicit problem representations are handled in a uniform manner by our approach. In particular, we do not assume boundedness of the feasible set as required in related methods.
Stochastic Mechanisms of Cell Fate Specification that Yield Random or Robust Outcomes
Johnston, Robert J.; Desplan, Claude
2011-01-01
Although cell fate specification is tightly controlled to yield highly reproducible results and avoid extreme variation, developmental programs often incorporate stochastic mechanisms to diversify cell types. Stochastic specification phenomena are observed in a wide range of species and an assorted set of developmental contexts. In bacteria, stochastic mechanisms are utilized to generate transient subpopulations capable of surviving adverse environmental conditions. In vertebrate, insect, and worm nervous systems, stochastic fate choices are used to increase the repertoire of sensory and motor neuron subtypes. Random fate choices are also integrated into developmental programs controlling organogenesis. Although stochastic decisions can be maintained to produce a mosaic of fates within a population of cells, they can also be compensated for or directed to yield robust and reproducible outcomes. PMID:20590453
Scosyrev, Emil
2013-01-01
In randomized trials with imperfect compliance, it is sometimes recommended to supplement the intention-to-treat estimate with an instrumental variable (IV) estimate, which is consistent for the effect of treatment administration in those subjects who would get treated if randomized to treatment and would not get treated if randomized to control. The IV estimation however has been criticized for its reliance on simultaneous existence of complementary "fatalistic" compliance states. The objective of the present paper is to identify some sufficient conditions for consistent estimation of treatment effects in randomized trials with stochastic compliance. It is shown that in the stochastic framework, the classical IV estimator is generally inconsistent for the population-averaged treatment effect. However, even under stochastic compliance, with certain common experimental designs the IV estimator and a simple alternative estimator can be used for consistent estimation of the effect of treatment administration in well-defined and identifiable subsets of the study population.
Quantitative magnetic resonance image analysis via the EM algorithm with stochastic variation.
Zhang, Xiaoxi; Johnson, Timothy D; Little, Roderick J A; Cao, Yue
2008-01-01
Quantitative Magnetic Resonance Imaging (qMRI) provides researchers with insight into pathological and physiological alterations of living tissue, with the help of which researchers hope to predict (local) therapeutic efficacy early and determine optimal treatment schedules. However, the analysis of qMRI has been limited to ad hoc heuristic methods. Our research provides a powerful statistical framework for image analysis and sheds light on future localized adaptive treatment regimes tailored to the individual's response. We assume that, in an imperfect world, we only observe a blurred and noisy version of the underlying pathological/physiological changes via qMRI, due to measurement errors or unpredictable influences. We use a hidden Markov random field to model the spatial dependence in the data and develop a maximum likelihood approach via the Expectation-Maximization algorithm with stochastic variation. An important improvement over previous work is the assessment of variability in parameter estimation, which is the valid basis for statistical inference. More importantly, we focus on the expected changes rather than on image segmentation. Our research has shown that the approach is powerful in both simulation studies and on a real dataset, while remaining quite robust in the presence of some model assumption violations.
Hybrid discrete/continuum algorithms for stochastic reaction networks
Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; ...
2014-10-22
Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. As a result, the performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.
Hybrid discrete/continuum algorithms for stochastic reaction networks
Safta, Cosmin Sargsyan, Khachik Debusschere, Bert Najm, Habib N.
2015-01-15
Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker–Planck equation. The Fokker–Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. The performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.
Roussel, Marc R; Zhu, Rui
2006-12-08
The quantitative modeling of gene transcription and translation requires a treatment of two key features: stochastic fluctuations due to the limited copy numbers of key molecules (genes, RNA polymerases, ribosomes), and delayed output due to the time required for biopolymer synthesis. Recently proposed algorithms allow for efficient simulations of such systems. However, it is critical to know whether the results of delay stochastic simulations agree with those from more detailed models of the transcription and translation processes. We present a generalization of previous delay stochastic simulation algorithms which allows both for multiple delays and for distributions of delay times. We show that delay stochastic simulations closely approximate simulations of a detailed transcription model except when two-body effects (e.g. collisions between polymerases on a template strand) are important. Finally, we study a delay stochastic model of prokaryotic transcription and translation which reproduces observations from a recent experimental study in which a single gene was expressed under the control of a repressed lac promoter in E. coli cells. This demonstrates our ability to quantitatively model gene expression using these new methods.
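The core idea of a delay stochastic simulation algorithm, reactions fire with Gillespie-style exponential waiting times but some products are queued and released only after a delay, can be sketched as follows. The one-gene model, the rate constants, and the fixed 20-time-unit delay are illustrative assumptions, not values from the paper (which further allows multiple delays and distributions of delay times):

```python
import heapq
import math
import random

def delay_ssa(t_end, seed=3):
    """Delay SSA sketch for one gene: transcription initiation fires with
    exponential waiting times, but each finished transcript is released
    only after a fixed elongation delay."""
    rng = random.Random(seed)
    k_init, k_deg, delay = 0.5, 0.05, 20.0
    n, t = 0, 0.0                  # n = completed transcripts
    pending = []                   # min-heap of scheduled completion times
    while t < t_end:
        a0 = k_init + k_deg * n
        dt = -math.log(1.0 - rng.random()) / a0
        if pending and pending[0] <= t + dt:
            t = heapq.heappop(pending)       # a delayed completion fires first
            n += 1
            continue                         # re-draw with updated propensities
        t += dt
        if t >= t_end:
            break
        if rng.random() * a0 < k_init:
            heapq.heappush(pending, t + delay)   # initiation: queue the output
        else:
            n -= 1                               # degradation of a transcript
    return n

print(delay_ssa(t_end=500.0))  # fluctuates around k_init / k_deg = 10
```

Discarding the drawn waiting time when a queued completion intervenes is valid because the exponential distribution is memoryless; generalizing `delay` to a sampled value per initiation gives the distributed-delay variant the abstract describes.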
A stochastic disaggregation algorithm for analysis of change in the sub-daily extreme rainfall
NASA Astrophysics Data System (ADS)
Nazemi, Ali; Elshorbagy, Amin
2014-05-01
The statistical characteristics of local extreme rainfall, particularly at shorter durations, are among the key design parameters for urban storm water collection systems. Recent observations have provided sufficient evidence that ongoing climate change alters the form, pattern, intensity and frequency of precipitation across various temporal and spatial scales. Quantifying and predicting the resulting changes in the extremes, however, remains a challenging problem, especially for local and shorter-duration events. Most importantly, climate models are still unable to reproduce extreme rainfall events at global and regional scales. In addition, current climate model simulations are at much coarser temporal and spatial resolutions than can be readily used in local design applications. Spatial and temporal downscaling methods are therefore necessary to bring climate model simulations to finer scales. To tackle the temporal downscaling problem, we propose a stochastic algorithm, based on the notion of Rainfall Distribution Functions (RDFs), to disaggregate daily rainfall into hourly estimates. In brief, RDFs describe how historical daily rainfall totals are distributed into hourly segments. From a set of RDFs, an empirical probability distribution function can be constructed to describe the proportion of daily cumulative rainfall at each hourly time step. These hour-by-hour empirical distribution functions can be used for random generation of hourly rainfall given total daily values. We used this algorithm to disaggregate the daily spring and summer rainfalls in the city of Saskatoon, Saskatchewan, Canada and tested the performance of the disaggregation with respect to reproduction of extremes. In particular, the Intensity-Duration-Frequency (IDF) curves generated from both historical and reconstructed extremes are compared. The proposed disaggregation scheme is further plugged into an existing daily rainfall generator to
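A much-simplified stand-in for the disaggregation idea can be sketched in a few lines. Note the simplification: this resamples a whole historical day's hourly fractions and rescales them, whereas the paper builds hour-by-hour empirical distributions (RDFs); the two-day history is purely illustrative:

```python
import random

def disaggregate_daily(daily_total, historical_hourly, seed=0):
    """Disaggregate one daily rainfall total into 24 hourly values by
    resampling a historical wet day's hourly *fractions* and rescaling.

    historical_hourly -- past wet days, each a list of 24 hourly amounts
    """
    rng = random.Random(seed)
    patterns = []
    for day in historical_hourly:
        total = sum(day)
        if total > 0:                                  # skip dry days
            patterns.append([h / total for h in day])  # fractions sum to 1
    shape = rng.choice(patterns)
    return [daily_total * f for f in shape]

history = [
    [0] * 6 + [1, 3, 5, 3, 1, 0.5] + [0] * 12,   # morning storm
    [0] * 14 + [0.5, 2, 4, 2, 0.5] + [0] * 5,    # afternoon storm
]
hourly = disaggregate_daily(12.0, history)
print(len(hourly), round(sum(hourly), 6))  # 24 hourly values that sum to 12.0
```

By construction the hourly values conserve the daily total, which is the basic invariant any disaggregation scheme, including the RDF-based one, must satisfy.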
A new model for realistic random perturbations of stochastic oscillators
NASA Astrophysics Data System (ADS)
Dieci, Luca; Li, Wuchen; Zhou, Haomin
2016-08-01
Classical theories predict that solutions of differential equations will leave any neighborhood of a stable limit cycle if white noise is added to the system. In reality, many engineering systems modeled by second order differential equations, like the van der Pol oscillator, show remarkable robustness against noise perturbations, and the perturbed trajectories remain in the neighborhood of a stable limit cycle for all times of practical interest. In this paper, we propose a new model of noise to bridge this apparent discrepancy between theory and practice. Restricting to perturbations from within this new class of noise, we consider stochastic perturbations of second order differential systems that, in the unperturbed case, admit asymptotically stable limit cycles. We show that the perturbed solutions are globally bounded and remain in a tubular neighborhood of the underlying deterministic periodic orbit. We also define stochastic Poincaré map(s), and further derive partial differential equations for the transition density function.
Li, Yun; Wu, Wenqi; Jiang, Qingan; Wang, Jinling
2016-01-01
Based on stochastic modeling of Coriolis vibration gyros by the Allan variance technique, this paper discusses Angle Random Walk (ARW), Rate Random Walk (RRW) and Markov process gyroscope noises which have significant impacts on the North-finding accuracy. A new continuous rotation alignment algorithm for a Coriolis vibration gyroscope Inertial Measurement Unit (IMU) is proposed in this paper, in which the extended observation equations are used for the Kalman filter to enhance the estimation of gyro drift errors, thus improving the north-finding accuracy. Theoretical and numerical comparisons between the proposed algorithm and the traditional ones are presented. The experimental results show that the new continuous rotation alignment algorithm using the extended observation equations in the Kalman filter is more efficient than the traditional two-position alignment method. Using Coriolis vibration gyros with bias instability of 0.1°/h, a north-finding accuracy of 0.1° (1σ) is achieved by the new continuous rotation alignment algorithm, compared with 0.6° (1σ) north-finding accuracy for the two-position alignment and 1° (1σ) for the fixed-position alignment. PMID:27983585
Emergence of patterns in random processes. II. Stochastic structure in random events
NASA Astrophysics Data System (ADS)
Newman, William I.
2014-06-01
Random events can present what appears to be a pattern in the length of peak-to-peak sequences in time series and other point processes. Previously, we showed that this was the case in both individual and independently distributed processes as well as for Brownian walks. In addition, we introduced the use of the discrete form of the Langevin equation of statistical mechanics as a device for connecting the two limiting sets of behaviors, which we then compared with a variety of observations from the physical and social sciences. Here, we establish a probabilistic framework via the Smoluchowski equation for exploring the Langevin equation and its expected peak-to-peak sequence lengths, and we introduce a concept we call "stochastic structure in random events," or SSRE. We extend the Brownian model to include antipersistent processes via autoregressive (AR) models. We relate the latter to describe the behavior of Old Faithful Geyser in Yellowstone National Park, and we devise a further test for the validity of the Langevin and AR models. Given our analytic results, we show how the Langevin equation can be adapted to describe population cycles of three to four years observed among many mammalian species in biology.
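The discrete Langevin / AR(1) connection can be illustrated numerically: the classical result that i.i.d. noise yields a mean peak-to-peak spacing of three samples emerges directly, while persistence lengthens the spacing. Unit-variance Gaussian noise and the specific `phi` values are assumptions of this sketch, not parameters from the paper:

```python
import random

def mean_peak_spacing(phi, n=100_000, seed=2):
    """Simulate the discrete Langevin / AR(1) process
        x[t+1] = phi * x[t] + eps[t],   eps[t] ~ N(0, 1),
    and return the mean spacing between successive local maxima.
    phi = 0 recovers i.i.d. noise; phi -> 1 approaches a Brownian walk."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    peaks = [t for t in range(1, n) if x[t - 1] < x[t] > x[t + 1]]
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    return sum(gaps) / len(gaps)

print(mean_peak_spacing(0.0))  # near 3, the classical i.i.d. value
print(mean_peak_spacing(0.9))  # longer spacings for a persistent process
```

An antipersistent choice (negative lag-one increment correlation, as in the AR models the abstract mentions) shortens the spacing instead, which is the other limiting behavior the Langevin parameter interpolates between.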
A stochastic learning algorithm for layered neural networks
Bartlett, E.B.; Uhrig, R.E.
1992-12-31
The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architecture before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
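The basic random-optimization step the abstract modifies, a Gaussian search vector accepted only on improvement, can be sketched as follows. The OPDF search, stratified sampling, and DNA enhancements are not reproduced, and the toy least-squares loss stands in for a real network training problem:

```python
import random

def random_optimize(loss, w, sigma=0.3, iters=3000, seed=1):
    """Baseline random optimization: perturb the weight vector with a
    Gaussian search vector and keep the perturbation only if the loss
    improves."""
    rng = random.Random(seed)
    best = loss(w)
    for _ in range(iters):
        trial = [wi + rng.gauss(0.0, sigma) for wi in w]
        f = loss(trial)
        if f < best:               # greedy acceptance rule
            w, best = trial, f
    return w, best

# Toy problem: fit w so that w . x approximates y on a tiny dataset
# (exact solution is w = [2, -1]).
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
def mse(w):
    return sum((w[0]*x[0] + w[1]*x[1] - y) ** 2 for x, y in data) / len(data)

w, err = random_optimize(mse, [0.0, 0.0])
print(w, err)  # w drifts toward [2, -1] as err shrinks
```

The paper's contribution is to replace the fixed Gaussian `sigma` with a dynamically estimated optimal PDF and to let the architecture itself grow via DNA; this sketch shows only the loop those modifications plug into.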
An Analysis of Learning Algorithms in Complex Stochastic Environments
2007-06-01
speakers saying vowel phrases and resulted in a significant improvement in predictions during the refinement phase when contexts were added to the...with parameters for the agent, to both take actions and write the percepts it receives to a separate file. These two programs ran in tandem for...sensations, due to the recency threshold limiting the total number of percepts. A comparison of these two learning algorithms shows contrasting styles of
Stochastic Analysis of an Iterative Semi Blind Adaptive Beamforming Algorithm
2009-06-24
beamforming, demodulation, equalization, and decoding. In each stage, the initial beamformer weights computed by the TDMA training data are refined by...performance of the receiver for the Global System for Mobile communications (GSM). In recent years, there has been work on techniques to mitigate the...Viterbi algorithm (SOVA) [5], and decoding. In this paper, we extend the iterative beamformer to incorporate training-based as well as blind
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
NASA Astrophysics Data System (ADS)
Sabelfeld, K. K.
2015-09-01
A stochastic algorithm for simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1], where it was used to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with a random source term. In this paper we derive the general system of Smoluchowski-type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with continuous generation in time and random distribution in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of an interacting particle system at discrete but randomly progressing time instants. The segregation is analyzed through a correlation analysis of the vector random field of concentrations, which appears to be isotropic in space and stationary in time.
Investigation of stochastic radiation transport methods in random heterogeneous mixtures
NASA Astrophysics Data System (ADS)
Reinert, Dustin Ray
Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global warming induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error were implemented and evaluated. An extended memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed source calculations containing
Image estimation using doubly stochastic Gaussian random field models.
Woods, J W; Dravida, S; Mediavilla, R
1987-02-01
The two-dimensional (2-D) doubly stochastic Gaussian (DSG) model was introduced by one of the authors to provide a complete model for spatial filters which adapt to the local structure in an image signal. Here we present the optimal estimator and 2-D fixed-lag smoother for this DSG model extending earlier work of Ackerson and Fu. As the optimal estimator has an exponentially growing state space, we investigate suboptimal estimators using both a tree and a decision-directed method. Experimental results are presented.
Nested stochastic simulation algorithms for chemical kinetic systems with multiple time scales
E, Weinan; Liu, Di; Vanden-Eijnden, Eric
2007-01-20
We present an efficient numerical algorithm for simulating chemical kinetic systems with multiple time scales. This algorithm is an improvement of the traditional stochastic simulation algorithm (SSA), also known as Gillespie's algorithm. It is in the form of a nested SSA and uses an outer SSA to simulate the slow reactions with rates computed from realizations of inner SSAs that simulate the fast reactions. The algorithm itself is quite general and seamless, and it amounts to a small modification of the original SSA. Our analysis of such multi-scale chemical kinetic systems allows us to identify the slow variables in the system, derive effective dynamics on the slow time scale, and provide error estimates for the nested SSA. Efficiency of the nested SSA is discussed using these error estimates, and illustrated through several numerical examples.
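The nested structure described above can be illustrated with a minimal sketch (not the authors' implementation): a toy network with a fast reversible isomerization A ⇌ B and a slow conversion B → C, where all rate constants, the inner averaging window, and the molecule counts are invented for the example. The outer SSA draws slow-reaction firing times from an effective rate obtained by averaging the slow propensity over short inner SSA runs of the fast reactions:

```python
import math
import random

random.seed(1)

# Toy network (invented for illustration): fast A <-> B, slow B -> C.
KF, KB, KS = 100.0, 100.0, 1.0

def inner_ssa(a, b, t_inner=0.1):
    """Simulate only the fast reactions A <-> B and return the
    time-averaged propensity of the slow reaction B -> C."""
    t, acc = 0.0, 0.0
    while t < t_inner:
        r_fwd, r_bwd = KF * a, KB * b
        r_tot = r_fwd + r_bwd
        if r_tot == 0.0:
            acc += KS * b * (t_inner - t)
            break
        dt = min(-math.log(random.random()) / r_tot, t_inner - t)
        acc += KS * b * dt          # accumulate slow propensity over dt
        t += dt
        if t >= t_inner:
            break
        if random.random() * r_tot < r_fwd:
            a, b = a - 1, b + 1     # fire A -> B
        else:
            a, b = a + 1, b - 1     # fire B -> A
    return acc / t_inner

def nested_ssa(a, b, c, t_end=2.0, n_inner=5):
    """Outer SSA for the slow reaction; its rate is the average of the
    slow propensity over n_inner inner realizations of the fast SSA."""
    t = 0.0
    while t < t_end and a + b > 0:
        rate = sum(inner_ssa(a, b) for _ in range(n_inner)) / n_inner
        if rate <= 0.0:
            break
        t += -math.log(random.random()) / rate
        if t >= t_end:
            break
        # Fire B -> C; fast equilibration replenishes B from A between events.
        if b > 0:
            b, c = b - 1, c + 1
        else:
            a, c = a - 1, c + 1
    return a, b, c

a, b, c = nested_ssa(20, 0, 0)
```

Note that the sketch omits the error control discussed in the abstract; in practice the inner window and the number of inner realizations would be chosen from the error estimates.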
Representation of nonlinear random transformations by non-gaussian stochastic neural networks.
Turchetti, Claudio; Crippa, Paolo; Pirani, Massimiliano; Biagetti, Giorgio
2008-06-01
The learning capability of neural networks is equivalent to modeling physical events that occur in the real environment. Several early works have demonstrated that neural networks belonging to some classes are universal approximators of input-output deterministic functions. Recent works extend the ability of neural networks in approximating random functions using a class of networks named stochastic neural networks (SNN). In the language of system theory, the approximation of both deterministic and stochastic functions falls within the identification of nonlinear no-memory systems. However, all the results presented so far are restricted to the case of Gaussian stochastic processes (SPs) only, or to linear transformations that guarantee this property. This paper aims at investigating the ability of stochastic neural networks to approximate nonlinear input-output random transformations, thus widening the range of applicability of these networks to nonlinear systems with memory. In particular, this study shows that networks belonging to a class named non-Gaussian stochastic approximate identity neural networks (SAINNs) are capable of approximating the solutions of large classes of nonlinear random ordinary differential transformations. The effectiveness of this approach is demonstrated and discussed by some application examples.
On the relationship between Gaussian stochastic blockmodels and label propagation algorithms
NASA Astrophysics Data System (ADS)
Zhang, Junhao; Chen, Tongfei; Hu, Junfeng
2015-03-01
The problem of community detection has received great attention in recent years. Many methods have been proposed to discover communities in networks. In this paper, we propose a Gaussian stochastic blockmodel that uses Gaussian distributions to fit the weights of edges in networks for non-overlapping community detection. The maximum likelihood estimation of this model has the same objective function as general label propagation with node preference. Under maximum likelihood estimation, the node preference of a specific vertex turns out to be proportional to its intra-community eigenvector centrality (the corresponding entry in the principal eigenvector of the adjacency matrix of the subgraph inside that vertex's community). Additionally, the maximum likelihood estimation of a constrained version of our model is closely related to another extension of the label propagation algorithm, namely, the label propagation algorithm under constraint. Experiments show that the proposed Gaussian stochastic blockmodel performs well on various benchmark networks.
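The label-propagation-with-node-preference objective mentioned above can be sketched generically (the graph, edge weights, and uniform preference values below are invented; this is plain weighted label propagation, not the authors' maximum likelihood estimator):

```python
import random

def label_propagation(adj, preference=None, max_iter=100, seed=0):
    """Asynchronous label propagation with node preference: node v adopts
    the label l maximizing sum over neighbours u with label l of
    w(u, v) * preference[u]."""
    rng = random.Random(seed)
    nodes = list(adj)
    pref = preference or {v: 1.0 for v in nodes}
    labels = {v: v for v in nodes}       # start: every node is its own label
    for _ in range(max_iter):
        changed = False
        rng.shuffle(nodes)
        for v in nodes:
            score = {}
            for u, w in adj[v].items():
                score[labels[u]] = score.get(labels[u], 0.0) + w * pref[u]
            if not score:
                continue
            best = max(score, key=score.get)
            if score[best] > score.get(labels[v], 0.0):
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels

# Two weighted triangles joined by a weak bridge edge.
adj = {
    0: {1: 1.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0, 3: 0.1},
    3: {2: 0.1, 4: 1.0, 5: 1.0}, 4: {3: 1.0, 5: 1.0}, 5: {3: 1.0, 4: 1.0},
}
labels = label_propagation(adj)
```

With the weak bridge, propagation settles on one label per triangle, which is the non-overlapping partition such methods aim to find.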
Drawert, Brian; Lawson, Michael J.; Petzold, Linda; Khammash, Mustafa
2010-01-01
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm. PMID:20170209
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
The problem of tuning nonlinear dynamical systems parameters, such that the attained results are considered good ones, is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GA). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPGs which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and wide stability margin for a quadruped slow crawl gait.
Zyubin, M V; Kashurnikov, V A
2004-03-01
We propose a universal stochastic series expansion (SSE) method for the simulation of the Heisenberg model with arbitrary spin and the Bose-Hubbard model with interaction. We report the calculations involving soft-core bosons with interaction by the SSE method. Moreover, we develop a simple procedure for increased efficiency of the algorithm. From calculation of integrated autocorrelation times we conclude that the method is efficient for both models and essentially eliminates the critical slowing down problem.
Adaptive and Distributed Algorithms for Vehicle Routing in a Stochastic and Dynamic Environment
2010-11-18
stochastic and dynamic vehicle routing problems,” PhD Thesis, Dept. of Civil and Environmental Engineering, Massachusetts Institute of Technology ... Technology (MIT), Cambridge, in 2001. From 2001 to 2004, he was an Assistant Professor of aerospace engineering at the University of Illinois at Urbana... system. The general problem is known as the m-vehicle Dynamic Traveling Repairman Problem (m-DTRP). The best previously known control algorithms rely on
NASA Astrophysics Data System (ADS)
DeSantis, Emilio; Marinelli, Carlo
2007-09-01
We introduce and study a class of infinite-horizon non-zero-sum non-cooperative stochastic games with infinitely many interacting agents using ideas of statistical mechanics. First we show, in the general case of asymmetric interactions, the existence of a strategy that allows any player to eliminate losses after a finite random time. In the special case of symmetric interactions, we also prove that, as time goes to infinity, the game converges to a Nash equilibrium. Moreover, assuming that all agents adopt the same strategy, using arguments related to those leading to perfect simulation algorithms, spatial mixing and ergodicity are proved. In turn, ergodicity allows us to prove 'fixation', i.e. players will adopt a constant strategy after a finite time. The resulting dynamics is related to zero-temperature Glauber dynamics on random graphs of possibly infinite volume.
Genetic Algorithm and Tabu Search for Vehicle Routing Problems with Stochastic Demand
NASA Astrophysics Data System (ADS)
Ismail, Zuhaimy; Irhamah
2010-11-01
This paper presents the problem of designing solid waste collection routes, involving the scheduling of vehicles where each vehicle begins at the depot, visits customers and ends at the depot. It is modeled as a Vehicle Routing Problem with Stochastic Demands (VRPSD). A data set from a real-world case is used in this research. We developed Genetic Algorithm (GA) and Tabu Search (TS) procedures, and these have produced the best possible results. The problem data are inspired by a real case of VRPSD in waste collection. Results from the experiment show the advantages of the proposed algorithm, namely its robustness and better solution quality.
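A toy sketch of the GA component for a VRPSD-like instance follows (the coordinates, demand distribution, capacity, and all GA settings are invented, and the paper's actual operators and the TS procedure are not reproduced). The fitness of a customer ordering is a Monte Carlo estimate of the expected route length under a simple return-to-depot restocking policy:

```python
import math
import random

rng = random.Random(42)

# Hypothetical instance: depot at the origin, five customers, stochastic
# integer demands with mean 4, vehicle capacity 10.
DEPOT = (0.0, 0.0)
CUSTOMERS = [(1, 2), (3, 1), (4, 4), (2, 5), (5, 2)]
MEAN_DEMAND, CAPACITY = 4, 10

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def route_cost(order, demands):
    """Route length under a simple restocking policy: return to the depot
    whenever the next customer's demand exceeds the remaining load."""
    cost, load, pos = 0.0, CAPACITY, DEPOT
    for i in order:
        if demands[i] > load:
            cost += dist(pos, DEPOT)
            pos, load = DEPOT, CAPACITY
        cost += dist(pos, CUSTOMERS[i])
        pos, load = CUSTOMERS[i], load - demands[i]
    return cost + dist(pos, DEPOT)

def expected_cost(order, samples=30):
    """Monte Carlo estimate of the expected route length."""
    total = 0.0
    for _ in range(samples):
        demands = [rng.randint(1, 2 * MEAN_DEMAND - 1) for _ in CUSTOMERS]
        total += route_cost(order, demands)
    return total / samples

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest from p2."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i, gene in enumerate(child):
        if gene is None:
            child[i] = rest.pop(0)
    return child

def ga(pop_size=20, generations=40):
    pop = [rng.sample(range(len(CUSTOMERS)), len(CUSTOMERS))
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=expected_cost)
        pop = ranked[:2]                      # elitism
        while len(pop) < pop_size:
            p1, p2 = rng.sample(ranked[:10], 2)
            child = crossover(p1, p2)
            if rng.random() < 0.2:            # swap mutation
                i, j = rng.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            pop.append(child)
    return min(pop, key=expected_cost)

best_route = ga()
```

Because fitness is itself a noisy estimate, real VRPSD implementations re-evaluate or share demand samples across candidates; the sketch ignores this.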
Modeling of stochastic dynamics of time-dependent flows under high-dimensional random forcing
NASA Astrophysics Data System (ADS)
Babaee, Hessam; Karniadakis, George
2016-11-01
In this numerical study the effect of high-dimensional stochastic forcing in time-dependent flows is investigated. To efficiently quantify the evolution of stochasticity in such a system, the dynamically orthogonal method is used. In this methodology, the solution is approximated by a generalized Karhunen-Loeve (KL) expansion of the form $u(x,t;\omega) = \bar{u}(x,t) + \sum_{i=1}^{N} y_i(t;\omega)\, u_i(x,t)$, in which $\bar{u}(x,t)$ is the stochastic mean, the set of $u_i(x,t)$ is a deterministic orthogonal basis, and the $y_i(t;\omega)$ are the stochastic coefficients. Explicit evolution equations for $\bar{u}$, $u_i$ and $y_i$ are formulated. The elements of the basis $u_i(x,t)$ remain orthogonal for all times and evolve according to the system dynamics to capture the energetically dominant stochastic subspace. We consider two classical fluid dynamics problems: (1) flow over a cylinder, and (2) flow over an airfoil, under up to one-hundred-dimensional random forcing. We explore the interaction of intrinsic with extrinsic stochasticity in these flows. DARPA N66001-15-2-4055, Office of Naval Research N00014-14-1-0166.
SOS! An algorithm and software for the stochastic optimization of stimuli.
Armstrong, Blair C; Watson, Christine E; Plaut, David C
2012-09-01
The characteristics of the stimuli used in an experiment critically determine the theoretical questions the experiment can address. Yet there is relatively little methodological support for selecting optimal sets of items, and most researchers still carry out this process by hand. In this research, we present SOS, an algorithm and software package for the stochastic optimization of stimuli. SOS takes its inspiration from a simple manual stimulus selection heuristic that has been formalized and refined as a stochastic relaxation search. The algorithm rapidly and reliably selects a subset of possible stimuli that optimally satisfy the constraints imposed by an experimenter. This allows the experimenter to focus on selecting an optimization problem that suits his or her theoretical question and to avoid the tedious task of manually selecting stimuli. We detail how this optimization algorithm, combined with a vocabulary of constraints that define optimal sets, allows for the quick and rigorous assessment and maximization of the internal and external validity of experimental items. In doing so, the algorithm facilitates research using factorial, multiple/mixed-effects regression, and other experimental designs. We demonstrate the use of SOS with a case study and discuss other research situations that could benefit from this tool. Support for the generality of the algorithm is demonstrated through Monte Carlo simulations on a range of optimization problems faced by psychologists. The software implementation of SOS and a user manual are provided free of charge for academic purposes as precompiled binaries and MATLAB source files at http://sos.cnbc.cmu.edu.
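The "formalized manual heuristic" flavour of such a search can be sketched as a stochastic relaxation over item subsets (the item pool, the single match-a-target-mean constraint, and the annealing schedule below are invented stand-ins, far simpler than the constraint vocabulary SOS provides):

```python
import random

rng = random.Random(7)

# Hypothetical item pool: (name, frequency) pairs. Goal: choose a subset of
# 10 items whose mean frequency is as close as possible to a target value.
pool = [("w%d" % i, rng.uniform(0.0, 100.0)) for i in range(200)]
TARGET_MEAN, SUBSET_SIZE = 50.0, 10

def cost(subset):
    return abs(sum(pool[i][1] for i in subset) / len(subset) - TARGET_MEAN)

def sos(iterations=5000, temp=1.0, cooling=0.999):
    """Stochastic relaxation: propose single-item swaps; always accept
    improvements, accept worse moves with a decaying probability."""
    chosen = rng.sample(range(len(pool)), SUBSET_SIZE)
    best, best_cost = list(chosen), cost(chosen)
    for _ in range(iterations):
        trial = list(chosen)
        trial[rng.randrange(SUBSET_SIZE)] = rng.choice(
            [j for j in range(len(pool)) if j not in chosen])
        if cost(trial) < cost(chosen) or rng.random() < temp:
            chosen = trial
        temp *= cooling
        if cost(chosen) < best_cost:
            best, best_cost = list(chosen), cost(chosen)
    return best, best_cost

best, best_cost = sos()
```

In the real tool the cost combines many weighted constraints (matching means, minimizing correlations across factors, and so on) rather than one distance to a target.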
NASA Astrophysics Data System (ADS)
Nakau, K.; Fukuda, M.; Nagamine, Y.
2008-12-01
Performance of algorithms to detect wild fire improved remarkably with the switch from AVHRR to MODIS and the MOD14 algorithm. However, there are still many false alarms and omission errors for boreal forest and tundra fires. One of the reasons is that the algorithm reduces the essence of fire detection to a one-dimensional stochastic test whose test variable is not efficiently chosen. Therefore, we propose an improved algorithm, modified from MOD14, and validate wild fire detection algorithms for boreal forest. To improve the algorithm, we used a stochastic test based on a two-dimensional distribution. To validate the wild fire detection algorithm, a dataset of hotspot pixel perimeters is compared with wild fires observed by pilots of passenger flights. This cooperative wild fire observation effort was established in 2003, and the observation method has been improved year by year. Based on this comparison, the authors identified the one-dimensional contextual threshold as one of the bottlenecks of wild fire detection, and therefore modified MOD14 to use a stochastic test based on a two-dimensional distribution. In preliminary validation, the proposed algorithm detects 16% more hotspots without any additional false alarms compared to the existing MOD14 algorithm; this also corresponds to a 14% lower false alarm rate. More precise validation results will be presented. The proposed algorithm is used operationally on the IJIS fire monitoring website.
NASA Astrophysics Data System (ADS)
Rahman, Tuan A. Z.; Jalil, N. A. Abdul; As'arry, A.; Raja Ahmad, R. K.
2017-06-01
Support vector machine (SVM) is a state-of-the-art pattern recognition method. However, SVM performance is strongly influenced by its parameter selection. This paper presents the parameter optimization of an SVM classifier using a chaos-enhanced stochastic fractal search (SFS) algorithm to classify the conditions of a ball bearing. The vibration data for normal and damaged conditions of the ball bearing system were obtained from the Case Western Reserve University Bearing Data Centre. Features based on the time and frequency domains were generated to characterize the ball bearing conditions. The performance of the chaos-enhanced SFS algorithms is evaluated in comparison to their predecessor algorithm. In conclusion, the injection of chaotic maps into the SFS algorithm improved its convergence speed and search accuracy, based on the statistical results of the CEC 2015 benchmark test suites and their application to ball bearing fault diagnosis.
Genetic algorithms as global random search methods
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
Stochastic perturbations in open chaotic systems: random versus noisy maps.
Bódai, Tamás; Altmann, Eduardo G; Endler, Antonio
2013-04-01
We investigate the effects of random perturbations on fully chaotic open systems. Perturbations can be applied to each trajectory independently (white noise) or simultaneously to all trajectories (random map). We compare these two scenarios by generalizing the theory of open chaotic systems and introducing a time-dependent conditionally-map-invariant measure. For the same perturbation strength we show that the escape rate of the random map is always larger than that of the noisy map. In random maps we show that the escape rate κ and dimensions D of the relevant fractal sets often depend nonmonotonically on the intensity of the random perturbation. We discuss the accuracy (bias) and precision (variance) of finite-size estimators of κ and D, and show that the improvement of the precision of the estimations with the number of trajectories N is extremely slow (∝ 1/ln N). We also argue that the finite-size D estimators are typically biased. General theoretical results are combined with analytical calculations and numerical simulations in area-preserving baker maps.
NASA Astrophysics Data System (ADS)
Van Willigenburg, L. Gerard; De Koning, Willem L.
2013-02-01
Two different descriptions are used in the literature to formulate the optimal dynamic output feedback control problem for linear dynamical systems with white stochastic parameters and quadratic criteria, called the optimal compensation problem. One describes the matrix-valued white stochastic processes involved using a sum of deterministic matrices, each multiplied by a scalar stochastic process that is independent of the others. Another, which is more general and concise, uses Kronecker products instead. This article relates the statistics of both descriptions and shows their advantages and disadvantages. For the first description, an important result is the minimum number of matrices multiplied by scalar, independent stochastic processes needed to represent a given matrix-valued white stochastic process, together with an associated minimal representation. For the second description, an important result concerns the generation of all Kronecker products that represent the relevant statistics. Both results facilitate the specification of statistics of systems with white stochastic parameters. The second part of this article further exploits these results to perform a U-D factorisation of an algorithm that computes optimal dynamic output feedback controllers (optimal compensators) for linear discrete-time systems with white stochastic parameters and quadratic sum criteria. U-D factorisation of this type of algorithm is new. By solving several numerical examples, the U-D factored algorithm is compared with a conventional algorithm.
NASA Astrophysics Data System (ADS)
Negri, Rogério Galante; da Silva, Wagner Barreto; Mendes, Tatiana Sussel Gonçalves
2016-10-01
The availability of polarimetric synthetic aperture radar (PolSAR) images has increased, and consequently, the classification of such images has received immense attention. Among different classification methods in the literature, it is possible to distinguish them according to learning paradigm and approach. Unsupervised methods have as advantage the independence of labeled data for training. Regarding the approach, image classification can be performed based on its individual pixels or on previously identified regions in the image. Previous studies verified that the region-based classification of PolSAR images using stochastic distances can produce better results in comparison with the pixel-based. Faced with the independence of training data by unsupervised methods and the potential of the region-based approach with stochastic distances, this study proposes a version of the unsupervised K-means algorithm for PolSAR region-based classification based on stochastic distances. The Bhattacharyya stochastic distance between Wishart distributions was adopted to measure the dissimilarity among regions of the PolSAR image. Additionally, a measure was proposed to compare unsupervised classification results. Two case studies that consider real and simulated images were conducted, and the results showed that the proposed version of K-means achieves higher accuracy values in comparison with the classic version.
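A much-simplified sketch of region-based K-means with a stochastic distance follows: regions are summarized here by a univariate Gaussian (mean, variance) and compared with the Bhattacharyya distance between Gaussians, rather than the Wishart-based distance used for PolSAR covariance data, and the simulated regions and all settings are invented:

```python
import math
import random

rng = random.Random(0)

def bhattacharyya(p, q):
    """Bhattacharyya distance between two univariate Gaussians (mean, var)."""
    (m1, v1), (m2, v2) = p, q
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))

def kmeans_regions(regions, centers, iters=50):
    """K-means over region summaries using a stochastic distance."""
    k = len(centers)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in regions:
            j = min(range(k), key=lambda j: bhattacharyya(r, centers[j]))
            clusters[j].append(r)
        new_centers = []
        for j, cl in enumerate(clusters):
            if not cl:
                new_centers.append(centers[j])
            else:
                new_centers.append((sum(r[0] for r in cl) / len(cl),
                                    sum(r[1] for r in cl) / len(cl)))
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

# Simulated region summaries from two classes with distinct mean levels.
regions = ([(rng.gauss(1.0, 0.1), 0.2 + 0.05 * rng.random()) for _ in range(30)]
           + [(rng.gauss(5.0, 0.1), 0.2 + 0.05 * rng.random()) for _ in range(30)])
centers, clusters = kmeans_regions(regions, [regions[0], regions[-1]])
```

The structure (assign by stochastic distance, re-estimate per-cluster statistics) carries over directly when regions are Wishart-distributed covariance matrices.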
Schmidt, Deena R; Thomas, Peter J
2014-04-17
Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin-Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán's approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process.
Markov Random Fields, Stochastic Quantization and Image Analysis
1990-01-01
Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a priori models for the ... of Image Analysis can be given some fundamental justification then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.
A non-stochastic Coulomb collision algorithm for particle-in-cell methods
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacon, Luis
2016-10-01
Coulomb collision modules in PIC simulations are typically Monte-Carlo-based. Monte Carlo is attractive for its simplicity, efficiency in high dimensions, and conservation properties. However, it is noisy, of low temporal order (typically O(√Δt)), and has to resolve the collision frequency for accuracy. In this study, we explore a non-stochastic, multiscale alternative to Monte Carlo for PIC. The approach is based on a Green-function-based reformulation of the Vlasov-Fokker-Planck equation, which can be readily incorporated in modern multiscale collisionless PIC algorithms. An asymptotic-preserving operator splitting approach allows the collisional step to be treated independently from the particles while preserving the multiscale character of the method. A significant element of novelty in our algorithm is the use of a machine learning algorithm that avoids a velocity-space mesh for the collision step. The resulting algorithm is non-stochastic and first-order accurate in time. We will demonstrate the method with several relaxation examples.
Klein, Daniel J; Baym, Michael; Eckhoff, Philip
2014-01-01
Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by [Formula: see text]), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria.
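The core loop described above, treating each model run as a Bernoulli trial and regressing binary outcomes on parameters, can be sketched in one dimension (the stand-in model with its logistic success curve, the grid, the bandwidth, and the 95% target are all invented; the real algorithm adds density correction and adaptive experiment design):

```python
import math
import random

rng = random.Random(11)

def simulate(theta):
    """Stand-in stochastic model: 'success' occurs with a probability that
    rises with theta (a logistic curve, hidden from the algorithm)."""
    p_true = 1.0 / (1.0 + math.exp(-4.0 * (theta - 2.0)))
    return rng.random() < p_true

def smoothed_success(grid, outcomes, bandwidth=0.3):
    """Nadaraya-Watson kernel regression of binary outcomes on theta."""
    est = []
    for t in grid:
        num = den = 0.0
        for theta, y in outcomes:
            w = math.exp(-0.5 * ((theta - t) / bandwidth) ** 2)
            num += w * y
            den += w
        est.append(num / den)
    return est

grid = [0.1 * i for i in range(51)]                  # theta in [0, 5]
outcomes = [(t, 1.0 if simulate(t) else 0.0)
            for t in grid for _ in range(40)]        # 40 Bernoulli trials each
est = smoothed_success(grid, outcomes)
# Smallest theta whose smoothed success probability reaches the 95% target
theta_95 = next((t for t, p in zip(grid, est) if p >= 0.95), None)
```

The located crossing approximates the separatrix; the actual algorithm concentrates new simulations near this boundary instead of sampling the grid uniformly.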
Dynamic response analysis of linear stochastic truss structures under stationary random excitation
NASA Astrophysics Data System (ADS)
Gao, Wei; Chen, Jianjun; Cui, Mingtao; Cheng, Yi
2005-03-01
This paper presents a new method for the dynamic response analysis of linear stochastic truss structures under stationary random excitation. Considering the randomness of the structural physical parameters and geometric dimensions, the computational expressions of the mean value, variance and variation coefficient of the mean square value of the structural displacement and stress response under the stationary random excitation are developed by means of the random variable's functional moment method and the algebra synthesis method from the expressions of structural stationary random response of the frequency domain. The influences of the randomness of the structural physical parameters and geometric dimensions on the randomness of the mean square value of the structural displacement and stress response are inspected by the engineering examples.
Random-walk-based stochastic modeling of three-dimensional fiber systems.
Altendorf, Hellen; Jeulin, Dominique
2011-04-01
For the simulation of fiber systems, there exist several stochastic models: systems of straight nonoverlapping fibers, systems of overlapping bending fibers, or fiber systems created by sedimentation. However, there is a lack of models providing dense, nonoverlapping fiber systems with a given random orientation distribution and a controllable level of bending. We introduce a new stochastic model in this paper that generalizes the force-biased packing approach to fibers represented as chains of balls. The starting configuration is modeled using random walks, where two parameters in the multivariate von Mises-Fisher orientation distribution control the bending. The points of the random walk are associated with a radius and the current orientation. The resulting chains of balls are interpreted as fibers. The final fiber configuration is obtained as an equilibrium between repulsion forces avoiding crossing fibers and recover forces ensuring the fiber structure. This approach provides high volume fractions up to 72.0075%.
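The chain-of-balls representation with orientation persistence can be sketched as follows (a Gaussian perturbation of the direction vector is used as a crude stand-in for the multivariate von Mises-Fisher step, no packing or repulsion forces are included, and all parameter values are invented):

```python
import math
import random

rng = random.Random(5)

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def fiber(start, n_balls=50, radius=1.0, bend=0.15):
    """One fiber as a chain of balls along a persistent random walk.
    Small `bend` -> nearly straight fiber (stand-in for high von
    Mises-Fisher concentration); large `bend` -> tortuous fiber."""
    direction = normalize((rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)))
    pos = start
    balls = [(pos, radius)]
    for _ in range(n_balls - 1):
        # Perturb and renormalize the direction (approximate vMF step).
        direction = normalize(tuple(d + rng.gauss(0.0, bend) for d in direction))
        pos = tuple(p + radius * d for p, d in zip(pos, direction))
        balls.append((pos, radius))
    return balls

def end_to_end(balls):
    return math.dist(balls[0][0], balls[-1][0])

straight = fiber((0.0, 0.0, 0.0), bend=0.01)
wiggly = fiber((0.0, 0.0, 0.0), bend=0.8)
```

In the full model many such chains would then be relaxed together under the force-biased scheme (repulsion between overlapping balls, recovery forces along each chain) to reach the target volume fraction.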
Stochastic modeling and vibration analysis of rotating beams considering geometric random fields
NASA Astrophysics Data System (ADS)
Choi, Chan Kyu; Yoo, Hong Hee
2017-02-01
Geometric parameters such as the thickness and width of a beam are random for various reasons including manufacturing tolerance and operation wear. Due to these random parameter properties, the vibration characteristics of the structure are also random. In this paper, we derive equations of motion to conduct stochastic vibration analysis of a rotating beam using the assumed mode method and stochastic spectral method. The accuracy of the proposed method is first verified by comparing analysis results to those obtained with Monte-Carlo simulation (MCS). The efficiency of the proposed method is then compared to that of MCS. Finally, probability densities of various modal and transient response characteristics of rotating beams are obtained with the proposed method.
NASA Astrophysics Data System (ADS)
Ding, Derui; Shen, Yuxuan; Song, Yan; Wang, Yongxiong
2016-07-01
This paper is concerned with the state estimation problem for a class of discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks. The stochastic nonlinearity, described by statistical means, which covers several classes of well-studied nonlinearities as special cases, is considered. The randomly occurring deception attacks are modelled by a set of random variables obeying Bernoulli distributions with given probabilities. The purpose of the addressed state estimation problem is to design an estimator that aims to minimize the upper bound on the estimation error covariance at each sampling instant. Such an upper bound is minimized by properly designing the estimator gain. The proposed estimation scheme, in the form of two Riccati-like difference equations, is recursive. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed scheme.
NASA Astrophysics Data System (ADS)
Zhao, Xiangrong; Xu, Wei; Yang, Yongge; Wang, Xiying
2016-06-01
This paper deals with the stochastic responses of a viscoelastic-impact system under additive and multiplicative random excitations. The viscoelastic force is replaced by a combination of stiffness and damping terms. The non-smooth transformation of the state variables is utilized to transform the original system to a new system without the impact term. The stochastic averaging method is applied to yield the stationary probability density functions. The validity of the analytical method is verified by comparing the analytical results with the numerical results. It is worth noting that the restitution coefficient, the viscoelastic parameters and the damping coefficients can induce the occurrence of stochastic P-bifurcation. Furthermore, the joint stationary probability density functions with three peaks are explored.
Collision-Resolution Algorithms and Random-Access Communications.
1980-04-01
...performance of random-access algorithms that incorporate these algorithms. The first and most important of these is the conditional mean CRI
A stochastic model of randomly accelerated walkers for human mobility
Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc
2016-01-01
Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility. PMID:27573984
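The mechanism described, exponentially distributed travel times combined with velocity changes from random acceleration kicks, can be sketched as follows (all rates and scales are invented; this reproduces only the qualitative short-tailed behaviour, not fits to the vehicle data):

```python
import random

rng = random.Random(9)

def trip_distance(tau_mean=0.3, kick_rate=5.0, kick_scale=10.0):
    """Distance covered in one trip: the travel time is exponential, and
    the speed receives Gaussian acceleration kicks at Poisson times."""
    duration = rng.expovariate(1.0 / tau_mean)
    v = rng.gauss(0.0, kick_scale)       # initial speed from a first kick
    t, dist = 0.0, 0.0
    while True:
        dt = rng.expovariate(kick_rate)          # waiting time to next kick
        step = min(dt, duration - t)
        dist += abs(v) * step
        t += step
        if t >= duration:
            return dist
        v += rng.gauss(0.0, kick_scale)          # acceleration kick

distances = sorted(trip_distance() for _ in range(5000))
median, p99 = distances[2500], distances[4950]
```

The resulting distance distribution decays quickly rather than as a power law, which is the point of the abstract: such a distribution can still resemble a truncated power law over a limited range.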
Vrettas, Michail D; Opper, Manfred; Cornford, Dan
2015-01-01
This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems which can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. A restriction of the moment equations to piecewise polynomial functions, over time, dramatically reduces the complexity of approximate inference for stochastic differential equation models and makes it comparable to that of discrete time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000 dimensional state vectors and compares the results empirically with various well-known inference methodologies.
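The marginal-moment idea can be illustrated on the simplest case. For a linear (Ornstein-Uhlenbeck) SDE the Gaussian marginal is exact, and the approximating moments reduce to a pair of ODEs for the mean and variance. A minimal sketch; the model and parameter values below are illustrative, not taken from the paper:

```python
def ou_moments(theta, sigma, m0, s0, dt, steps):
    """Forward-Euler integration of the marginal mean/variance ODEs
    of the Ornstein-Uhlenbeck SDE  dx = -theta*x dt + sigma dW:
        dm/dt = -theta*m,    dS/dt = -2*theta*S + sigma^2.
    For this linear SDE the Gaussian marginal is exact, which is the
    closure a variational Gaussian approximation assumes."""
    m, s = m0, s0
    for _ in range(steps):
        m += -theta * m * dt
        s += (-2.0 * theta * s + sigma ** 2) * dt
    return m, s

# Relax from a sharp initial state x(0) = 2 towards the stationary law
# N(0, sigma^2 / (2*theta)) = N(0, 0.125).
m, s = ou_moments(theta=1.0, sigma=0.5, m0=2.0, s0=0.0, dt=0.001, steps=8000)
```

Restricting such moment trajectories to piecewise polynomials in time is what makes the inference cost comparable to a discrete-time hidden Markov model.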
An adaptive algorithm for simulation of stochastic reaction-diffusion processes
Ferm, Lars; Hellander, Andreas; Loetstedt, Per
2010-01-20
We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
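The SSA component referred to above is Gillespie's direct method. As a point of reference, here is a minimal direct-method sketch for a toy dimerization reaction (our example, not the paper's reaction-diffusion benchmark):

```python
import random

def ssa_step(x, rates, stoich):
    """One Gillespie SSA (direct method) step.

    x      -- current copy-number state (list of ints)
    rates  -- propensity functions a_j(x)
    stoich -- state-change vectors nu_j
    Returns (tau, new_state), or (None, x) if no reaction can fire."""
    a = [r(x) for r in rates]
    a0 = sum(a)
    if a0 == 0.0:
        return None, x
    tau = random.expovariate(a0)            # waiting time ~ Exp(a0)
    u, j, acc = random.random() * a0, 0, a[0]
    while acc < u:                          # pick channel j with prob a_j/a0
        j += 1
        acc += a[j]
    return tau, [xi + dj for xi, dj in zip(x, stoich[j])]

# Toy dimerization 2A -> B with propensity c * xA * (xA - 1) / 2.
random.seed(1)
c = 0.01
rates = [lambda x: c * x[0] * (x[0] - 1) / 2.0]
stoich = [(-2, +1)]
t, state = 0.0, [100, 0]
while True:
    tau, state = ssa_step(state, rates, stoich)
    if tau is None:
        break
    t += tau
```

The hybrid method of the paper applies such exact steps only where the reaction counts are small, switching to tau-leaping or macroscopic diffusion elsewhere.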
NASA Astrophysics Data System (ADS)
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from the traditional methods, which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the dynamics of flow systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
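A minimal illustration of the energy-minimization idea for the simplest (Newtonian, Poiseuille) case: splitting a fixed total flow between two parallel ducts by minimizing viscous dissipation recovers the pressure-balance solution of the traditional conservation-based method. The resistances and the golden-section optimizer below are illustrative stand-ins, not the paper's fluids or algorithms:

```python
def dissipation(q1, Q, R1, R2):
    """Viscous dissipation of two parallel Poiseuille ducts carrying
    total flow Q split as q1 and Q - q1 (toy Newtonian case, where
    each duct dissipates R * q^2)."""
    q2 = Q - q1
    return R1 * q1 ** 2 + R2 * q2 ** 2

def minimize_1d(f, lo, hi, iters=200):
    """Golden-section search for a unimodal 1-D objective (a simple
    stand-in for the multi-variable optimizers in the paper)."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

Q, R1, R2 = 1.0, 2.0, 3.0
q1 = minimize_1d(lambda q: dissipation(q, Q, R1, R2), 0.0, Q)
# The energy minimum reproduces the pressure-balance split R1*q1 = R2*q2.
```

Stationarity of the dissipation gives R1*q1 = R2*q2, i.e. equal pressure drops across the two branches, which is exactly what the conservation-based analysis yields for this quadratic case.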
R-leaping: accelerating the stochastic simulation algorithm by reaction leaps.
Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros
2006-08-28
A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.
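Stripped of the accuracy and negative-species controls described in the abstract, the core R-leap step fires a fixed number L of reactions and partitions them among channels by conditional binomial sampling. A hedged sketch; the reversible isomerization example is ours:

```python
import random

def r_leap_step(x, rates, stoich, L):
    """One R-leap step: fire exactly L reactions, partitioned among
    channels by conditional binomial sampling (a sketch of the idea,
    without the paper's accuracy and negative-species controls)."""
    a = [r(x) for r in rates]
    a0 = sum(a)
    if a0 == 0.0:
        return None, x
    # Time to fire L reactions ~ Gamma(L, 1/a0): sum of L Exp(a0) waits.
    tau = sum(random.expovariate(a0) for _ in range(L))
    # Conditional binomial partition of the L firings over the channels.
    k, remaining, rem_a = [0] * len(a), L, a0
    for j in range(len(a) - 1):
        p = a[j] / rem_a
        kj = sum(1 for _ in range(remaining) if random.random() < p)
        k[j] = kj
        remaining -= kj
        rem_a -= a[j]
    k[-1] = remaining
    new_x = list(x)
    for j, kj in enumerate(k):
        for i, dij in enumerate(stoich[j]):
            new_x[i] += kj * dij
    return tau, new_x

# Reversible isomerization A <-> B, one leap of L = 10 firings.
random.seed(2)
rates = [lambda x: 0.5 * x[0], lambda x: 0.5 * x[1]]
stoich = [(-1, +1), (+1, -1)]
tau, state = r_leap_step([50, 50], rates, stoich, L=10)
```

Because the leap fixes the number of firings rather than the time span (as tau-leaping does), the sampling cost per step is independent of how disparate the channel rates are.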
Scheduling Algorithm with Power Allocation for Random Unitary Beamforming
NASA Astrophysics Data System (ADS)
Tsuchiya, Yuki; Ohtsuki, Tomoaki; Kaneko, Toshinobu
Random unitary beamforming is one of the schemes that can reduce the amount of feedback information in multiuser diversity techniques with multiple-antenna downlink transmission. In Multiple-Input Multiple-Output (MIMO) systems, throughput performance is greatly improved using AMC (Adaptive Modulation and Coding). Throughput performance is also improved by allocating power among streams appropriately. In random unitary beamforming, the transmitter has only partial channel state information (CSI) of each receiver. Thus, it is difficult for random unitary beamforming to use conventional power allocation methods that assume all receivers have full CSI. In this paper, we propose a new scheduling algorithm with power allocation for downlink random unitary beamforming that improves throughput performance without full CSI. We provide numerical results for the proposed scheduling algorithm and compare them to those of the conventional random unitary beamforming scheduling algorithm. We show that random unitary beamforming achieves the best system throughput performance with two transmit antennas. We also show that the proposed algorithm attains higher throughput, with only a small increase in feedback, than the conventional random unitary beamforming scheduling algorithm.
NASA Astrophysics Data System (ADS)
Gu, Anhui; Li, Yangrong
The paper is devoted to establishing sufficient criteria for the existence and upper semi-continuity of random attractors for stochastic lattice dynamical systems. Relying on the family of random systems itself, we first establish the abstract result for the case where the family is convergent, uniformly absorbing and uniformly asymptotically null in the phase space. Then we apply the results to a second-order lattice dynamical system driven by multiplicative white noise. It is indicated that a criterion depending on the dynamical system itself appears more applicable to lattice differential models than the existing ones.
NASA Astrophysics Data System (ADS)
Ma, Juan; Gao, Wei; Wriggers, Peter; Wu, Tao; Sahraee, Shahab
2010-04-01
A new two-factor method based on the probability and the fuzzy sets theory is used for the analyses of the dynamic response and reliability of fuzzy-random truss systems under the stationary stochastic excitation. Considering the fuzzy-randomness of the structural physical parameters and geometric dimensions simultaneously, the fuzzy-random correlation function matrix of structural displacement response in time domain and the fuzzy-random mean square values of structural dynamic response in frequency domain are developed by using the two-factor method, and the fuzzy numerical characteristics of dynamic responses are then derived. Based on numerical characteristics of structural fuzzy-random dynamic responses, the structural fuzzy-random dynamic reliability and its fuzzy numerical characteristic are obtained from the Poisson equation. The effects of the uncertainty of the structural parameters on structural dynamic response and reliability are illustrated via two engineering examples and some important conclusions are obtained.
On efficient randomized algorithms for finding the PageRank vector
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is a unit simplex in R^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
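The Markov chain Monte Carlo approach can be sketched as follows: run many short random walks that terminate with probability 1 - α at each step and histogram the end points, whose distribution approximates the PageRank vector. The tiny graph and the parameters below are illustrative only, not the regime n ~ 10^7-10^9 treated in the paper:

```python
import random

def pagerank_mc(adj, n, walks=20000, damping=0.85, seed=0):
    """Estimate the PageRank vector by Monte Carlo: each walk starts
    at a uniform node, continues with probability `damping`, and the
    terminal-node frequencies approximate PageRank.
    adj maps node -> list of out-neighbours (a toy stand-in for the
    huge sparse matrix P of the paper)."""
    rng = random.Random(seed)
    counts = [0] * n
    for _ in range(walks):
        v = rng.randrange(n)
        while rng.random() < damping:
            nbrs = adj.get(v, [])
            # Dangling node: jump uniformly, as in the standard model.
            v = rng.choice(nbrs) if nbrs else rng.randrange(n)
        counts[v] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Tiny 3-node cycle 0 -> 1 -> 2 -> 0; by symmetry PageRank is uniform.
pr = pagerank_mc({0: [1], 1: [2], 2: [0]}, n=3)
```

The appeal of this estimator, as the abstract notes, is that its cost depends on the mixing of the chain rather than on dense matrix-vector products.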
Stochastic climate dynamics: Random attractors and time-dependent invariant measures
NASA Astrophysics Data System (ADS)
Ghil, Michael
2010-05-01
This talk reports on attempts at the unification of two approaches that have dominated theoretical climate dynamics since its inception in the 1960s: the nonlinear deterministic (Lorenz, JAS, 1963) approach and the linear stochastic one (Hasselmann, Tellus, 1976). This unification, via the theory of random dynamical systems (RDS), allows one to consider the detailed geometric structure of the random attractors associated with nonlinear, stochastically perturbed systems. These attractors extend the concept of strange attractors from autonomous dynamical systems to non-autonomous systems with random forcing. A high-resolution numerical study of two "toy" models is carried out in their respective phase spaces; it allows one to obtain a good approximation of their global random attractors, as well as of the time-dependent invariant measures supported by these attractors. The latter measures are shown to be random Sinai-Ruelle-Bowen (SRB) measures; such measures have an intuitive, physical interpretation, obtained essentially by "flowing" the entire phase space onto the attractor. The first of the two models studied herein is a stochastically forced version of the classical Lorenz (1963) model. The second one is a low-dimensional, nonlinear stochastic model of the El Nino-Southern Oscillation (ENSO), based on that of Timmermann and Jin (GRL, 2002). In spite of their highly idealized character, both these models are of fundamental interest for climate dynamics and provide insight into its predictability. This talk represents joint work with Mickael D. Chekroun (Ecole Normale Superieure, Paris, France, and University of California, Los Angeles, USA; chekro@lmd.ens.fr) and Eric Simonnet (Institut Non-Lineaire de Nice, Sophia Antipolis, France; Eric.Simonnet@inln.cnrs.fr).
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
Application of stochastic weighted algorithms to a multidimensional silica particle model
Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus
2013-09-01
Highlights: • Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. • An implementation of SWAs with the transition kernel is presented. • The SWAs' solutions converge to the direct simulation algorithm's (DSA) solution. • The efficiency of SWAs is evaluated for this multidimensional particle model. • It is shown that SWAs can be used for coagulation problems in industrial systems. Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83-98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high-dimensional particle models to simulate real-world systems.
NASA Astrophysics Data System (ADS)
Alfonso, Lester; Zamora, Jose; Cruz, Pedro
2015-04-01
The stochastic approach to coagulation considers the coalescence process going on in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results were obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. In order to increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
Gutjahr, Walter J
2012-01-01
For stochastic multi-objective combinatorial optimization (SMOCO) problems, the adaptive Pareto sampling (APS) framework has been proposed, which is based on sampling and on the solution of deterministic multi-objective subproblems. We show that when plugging in the well-known simple evolutionary multi-objective optimizer (SEMO) as a subprocedure into APS, ε-dominance has to be used to achieve fast convergence to the Pareto front. Two general theorems are presented indicating how runtime complexity results for APS can be derived from corresponding results for SEMO. This may be a starting point for the runtime analysis of evolutionary SMOCO algorithms.
NASA Astrophysics Data System (ADS)
Islam, Md Shafiqul
Let T = {τ_1(x), τ_2(x), …, τ_K(x); p_1(x), p_2(x), …, p_K(x)} be a position dependent random map which possesses a unique absolutely continuous invariant measure μ̂ with probability density function f̂. We consider a family {T_N}_{N≥1} of stochastic perturbations T_N of the random map T. Each T_N is a Markov process with the transition density ∑_{k=1}^{K} p_k(x) q_N(τ_k(x), ·), where q_N(x, ·) is a doubly stochastic, periodic and separable kernel. Using Fourier approximation, we construct a finite dimensional approximation P_N to a perturbed Perron-Frobenius operator. Let f_N^* be a fixed point of P_N. We show that {f_N^*} converges in L^1 to f̂.
A Randomized Approximate Nearest Neighbors Algorithm
2010-09-14
Random vibration of nonlinear beams by the new stochastic linearization technique
NASA Technical Reports Server (NTRS)
Fang, J.
1994-01-01
In this paper, the beam under general time-dependent stationary random excitation is investigated, for cases where an exact solution is unavailable. Numerical simulations are carried out to compare the results with those yielded by the conventional linearization techniques. It is found that the modified version of the stochastic linearization technique yields considerably more accurate results for the mean square displacement of the beam than the conventional equivalent linearization technique, especially in the case of large nonlinearity.
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-12-12
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
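The position-controller idea described above (noisy RSS samples, smoothing, and stochastic gradient ascent on the weaker link) can be sketched in one dimension. The path-loss model, geometry, and gains below are our illustrative assumptions, not the paper's models or CERN measurements:

```python
import math, random

def rss(d, rng, noise=0.5):
    """Toy log-distance path-loss with Gaussian shadowing noise
    (an illustrative stand-in for a measured RSS sample)."""
    return -40.0 - 20.0 * math.log10(max(d, 0.1)) + rng.gauss(0.0, noise)

def balance_relay(span=10.0, steps=400, lr=0.1, alpha=0.2, seed=3):
    """Stochastic gradient ascent of the relay position on the weaker
    of the server and client links, with exponential-moving-average
    smoothing of the noisy finite-difference gradient."""
    rng = random.Random(seed)
    x, ema, eps = 1.0, 0.0, 0.3       # relay starts near the server (at 0)
    tail = []
    for step in range(steps):
        def weaker(pos):
            return min(rss(pos, rng), rss(span - pos, rng))
        g = (weaker(x + eps) - weaker(x - eps)) / (2.0 * eps)
        ema = alpha * g + (1.0 - alpha) * ema    # smooth the noisy gradient
        x = min(max(x + lr * ema, 0.5), span - 0.5)
        if step >= steps - 100:
            tail.append(x)
    return sum(tail) / len(tail)      # time-averaged final position

pos = balance_relay()
```

For this symmetric toy geometry the RSS-balancing objective is maximized at the midpoint between server and client, and the controller settles near it despite the shadowing noise.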
Stochastic diffusion and Kolmogorov entropy in regular and random Hamiltonians
Isichenko, M.B. (Inst. for Fusion Studies; Kurchatov Inst. of Atomic Energy, Moscow); Horton, W. (Inst. for Fusion Studies); Kim, D.E.; Heo, E.G.; Choi, D.I.
1992-05-01
The scalings of the E × B turbulent diffusion coefficient D and the Kolmogorov entropy K with the potential amplitude φ̃ of the fluctuation are studied using the geometrical analysis of closed and extended particle orbits for several types of drift Hamiltonians. The high-amplitude scalings D ∝ φ̃^2 or φ̃^0 and K ∝ log φ̃ are shown to arise from different forms of a periodic (four-wave) Hamiltonian φ̃(x, y, t), thereby explaining the controversy in earlier numerical results. For a quasi-random (six-wave) Hamiltonian, numerical data for the diffusion D ∝ φ̃^(0.92 ± 0.04) and the Kolmogorov entropy K ∝ φ̃^(0.56 ± 0.17) are presented and compared with the percolation theory predictions D_p ∝ φ̃^(0.7), K_p ∝ φ̃^(0.5). To study the turbulent diffusion in a general form of Hamiltonian, a new approach of the series expansion of the Lagrangian velocity correlation function is proposed and discussed.
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short-sale constraints and proportional transaction costs are considered simultaneously; (3) applying a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
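Since the paper's solver is a genetic algorithm, a generic real-coded GA sketch (elitist selection, uniform crossover, Gaussian mutation) may clarify the mechanics. The toy quadratic objective below stands in for the CVaR asset-allocation model, which we do not reproduce:

```python
import random

def genetic_max(fitness, bounds, pop_size=40, gens=60, mut=0.1, seed=4):
    """A minimal real-coded genetic algorithm: keep the fitter half of
    the population (elitism), fill the rest with uniform-crossover
    children of random elite pairs, and occasionally mutate a gene."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = [[rng.uniform(lo[i], hi[i]) for i in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(dim)]
            if rng.random() < mut:           # Gaussian mutation, clipped
                j = rng.randrange(dim)
                child[j] = min(max(child[j] + rng.gauss(0.0, 0.1), lo[j]), hi[j])
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy objective: maximize -(x - 0.3)^2 - (y - 0.7)^2 over the unit square.
best = genetic_max(lambda w: -(w[0] - 0.3) ** 2 - (w[1] - 0.7) ** 2,
                   ([0.0, 0.0], [1.0, 1.0]))
```

A GA like this needs only fitness evaluations, which is why it suits the non-smooth, constraint-laden CVaR objective discussed in the abstract.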
The study of randomized visual saliency detection algorithm.
Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing
2013-01-01
Image segmentation for a high-quality visual saliency map depends strongly on the existing visual saliency metrics. Most methods produce only a sketchy saliency map, and a coarse saliency map degrades the image segmentation results. This paper presents a randomized visual saliency detection algorithm. The randomized method can quickly generate a detailed saliency map of the same size as the original input image, and it can meet the real-time requirements of image content-based scaling of the saliency map. For fast randomized detection of salient video areas, the algorithm requires only a small amount of memory space to detect a detailed, oriented visual saliency map. The presented results show that using this visual saliency map in the subsequent image segmentation process can yield an ideal segmentation result.
Application of stochastic weighted algorithms to a multidimensional silica particle model
NASA Astrophysics Data System (ADS)
Menz, William J.; Patterson, Robert I. A.; Wagner, Wolfgang; Kraft, Markus
2013-09-01
This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83-98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.
Stochastic interference of fluorescence radiation in random media with large inhomogeneities
NASA Astrophysics Data System (ADS)
Zimnyakov, D. A.; Asharchuk, I. A.; Yuvchenko, S. A.; Sviridov, A. P.
2017-03-01
Stochastic interference of fluorescence light outgoing from a dye-doped coarse-grained random medium pumped by continuous-wave laser radiation was experimentally studied. It was found that the contrast of random interference patterns correlates strongly with the wavelength-dependent fluorescence intensity and reaches its minimum in the vicinity of the cusp of the emission spectrum. The decay in the contrast of spectrally selected speckle patterns was interpreted in terms of the broadening of the pathlength distribution for fluorescence radiation propagating in the medium. This broadening is presumably caused by the wavelength-dependent negative absorption of the medium.
NASA Astrophysics Data System (ADS)
Pivovarov, Dmytro; Steinmann, Paul
2016-12-01
In the current work we apply the stochastic version of the FEM to the homogenization of magneto-elastic heterogeneous materials with random microstructure. The main aim of this study is to capture accurately the discontinuities appearing at matrix-inclusion interfaces. We demonstrate and compare three different techniques proposed in the literature for the purely mechanical problem, i.e. global, local and enriched stochastic basis functions. Moreover, we demonstrate the implementation of the isoparametric concept in the enlarged physical-stochastic product space. The Gauss integration rule in this multidimensional space is discussed. In order to design a realistic stochastic Representative Volume Element we analyze actual scans obtained by electron microscopy and provide numerical studies of the micro particle distribution. The SFEM framework described in our previous work (Pivovarov and Steinmann in Comput Mech 57(1): 123-147, 2016) is extended to the case of the magneto-elastic materials. To this end, the magneto-elastic energy function is used, and the corresponding hyper-tensors of the magneto-elastic problem are introduced. In order to estimate the methods' accuracy we performed a set of simulations for elastic and magneto-elastic problems using three different SFEM modifications. All results are compared with "brute-force" Monte-Carlo simulations used as reference solution.
Steady state and mean recurrence time for random walks on stochastic temporal networks
NASA Astrophysics Data System (ADS)
Speidel, Leo; Lambiotte, Renaud; Aihara, Kazuyuki; Masuda, Naoki
2015-01-01
Random walks are basic diffusion processes on networks and have applications in, for example, searching, navigation, ranking, and community detection. Recent recognition of the importance of temporal aspects on networks spurred studies of random walks on temporal networks. Here we theoretically study two types of event-driven random walks on a stochastic temporal network model that produces arbitrary distributions of interevent times. In the so-called active random walk, the interevent time is reinitialized on all links upon each movement of the walker. In the so-called passive random walk, the interevent time is reinitialized only on the link that has been used the last time, and it is a type of correlated random walk. We find that the steady state is always the uniform density for the passive random walk. In contrast, for the active random walk, it increases or decreases with the node's degree depending on the distribution of interevent times. The mean recurrence time of a node is inversely proportional to the degree for both active and passive random walks. Furthermore, the mean recurrence time does or does not depend on the distribution of interevent times for the active and passive random walks, respectively.
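For exponential (Poissonian) interevent times, the active walk described above reduces to an ordinary neighbour-to-neighbour random walk, since the minimum of i.i.d. exponential clocks is uniform over the incident links; visit frequencies then grow with degree. A minimal simulation sketch on a made-up toy graph (illustrative only, not the model of the paper):

```python
import random
from collections import Counter

def active_random_walk(adj, steps, rng):
    """'Active' event-driven walk: after every move the interevent clock on
    each incident link is redrawn; the walker exits via the link whose
    clock fires first (unit-rate exponential clocks here)."""
    node = next(iter(adj))
    visits = Counter()
    for _ in range(steps):
        # redraw an exponential clock per incident link; earliest one wins
        node = min(adj[node], key=lambda nbr: rng.expovariate(1.0))
        visits[node] += 1
    return visits

rng = random.Random(7)
# undirected toy graph: node 0 has the highest degree
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
visits = active_random_walk(adj, 20000, rng)
```

With heavier-tailed interevent-time distributions the effective link choice is no longer uniform, which is where the degree dependence discussed in the abstract becomes distribution-sensitive.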
NASA Astrophysics Data System (ADS)
Roy, Soumen; Sengupta, Anand S.; Thakor, Nilay
2017-05-01
Astrophysical compact binary systems consisting of neutron stars and black holes are an important class of gravitational wave (GW) sources for advanced LIGO detectors. Accurate theoretical waveform models from the inspiral, merger, and ringdown phases of such systems are used to filter detector data under the template-based matched-filtering paradigm. An efficient grid over the parameter space at a fixed minimal match has a direct impact on the overall time taken by these searches. We present a new hybrid geometric-random template placement algorithm for signals described by parameters of two masses and one spin magnitude. Such template banks could potentially be used in GW searches from binary neutron stars and neutron star-black hole systems. The template placement is robust and is able to automatically accommodate curvature and boundary effects with no fine-tuning. We also compare these banks against vanilla stochastic template banks and show that while both are equally efficient in the fitting-factor sense, the bank sizes are approximately 25% larger in the stochastic method. Further, we show that the generation of the proposed hybrid banks can be sped up by nearly an order of magnitude over the stochastic bank. Generic issues related to optimal implementation are discussed in detail. These improvements are expected to directly reduce the computational cost of gravitational wave searches.
2014-01-01
Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin–Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán’s approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process. PMID:24742077
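The underlying object is a continuous-time Markov chain on channel states, of which only a subset (the conducting states) is observed through a linear measurement. A minimal sketch of exact simulation of such a chain, with a hypothetical three-state channel where only one state conducts (this is the full process, not the stochastic shielding approximation itself):

```python
import random

def simulate_channel(rates, conducting, t_end, rng):
    """Exact simulation of a continuous-time Markov chain on channel
    states; only occupancy of the 'conducting' states is observable."""
    state, t, open_time = 0, 0.0, 0.0
    while t < t_end:
        out = rates[state]                      # list of (target, rate)
        total = sum(r for _, r in out)
        dt = min(rng.expovariate(total), t_end - t)
        if state in conducting:
            open_time += dt                     # accumulate observable time
        t += dt
        if t >= t_end:
            break
        x = rng.uniform(0.0, total)             # pick next state by rate
        for target, r in out:
            x -= r
            if x <= 0.0:
                state = target
                break
    return open_time / t_end

rng = random.Random(1)
# toy 3-state channel: closed <-> closed <-> open; only state 2 conducts
rates = {0: [(1, 2.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 3.0)]}
frac_open = simulate_channel(rates, {2}, 5000.0, rng)
```

The shielding idea is then to keep the noise only on transitions adjacent to the conducting states (here the 1-2 edge) and treat the remaining edges deterministically.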
Research on machine learning framework based on random forest algorithm
NASA Astrophysics Data System (ADS)
Ren, Qiong; Cheng, Hui; Han, Hai
2017-03-01
With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these have been widely used. However, existing frameworks are limited by the machine learning algorithms themselves, for example by the difficulty of parameter selection, sensitivity to noise, and a high barrier to use. This paper introduces the research background of machine learning frameworks and, building on the random forest algorithm commonly used for classification, sets out the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.
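The random-forest idea the ARF algorithm builds on combines bootstrap sampling, random feature selection, and majority voting. A toy, dependency-free illustration with decision stumps as the "trees" (the data set and parameters are made up; this is not the paper's ARF algorithm):

```python
import random

def fit_stump(sample, f):
    """One-split 'tree' on feature f: scan candidate thresholds and keep
    the split with the best majority-vote accuracy on the sample."""
    def maj(ys):
        return max(set(ys), key=ys.count) if ys else 0
    best = None
    for x0, _ in sample:
        t = x0[f]
        lmaj = maj([y for x, y in sample if x[f] <= t])
        rmaj = maj([y for x, y in sample if x[f] > t])
        acc = sum((lmaj if x[f] <= t else rmaj) == y for x, y in sample)
        if best is None or acc > best[0]:
            best = (acc, t, lmaj, rmaj)
    _, t, lmaj, rmaj = best
    return lambda x: lmaj if x[f] <= t else rmaj

def random_forest(data, n_trees, rng):
    """Bagging + random feature choice per tree + majority vote."""
    trees = []
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]   # bootstrap sample
        f = rng.randrange(len(data[0][0]))        # random feature per tree
        trees.append(fit_stump(boot, f))
    def predict(x):
        votes = [tree(x) for tree in trees]
        return max(set(votes), key=votes.count)   # majority vote
    return predict

rng = random.Random(3)
# two correlated informative features; label = whether u exceeds 0.5
data = [((u, u + rng.uniform(-0.05, 0.05)), int(u > 0.5))
        for u in [i / 19 for i in range(20)]]
predict = random_forest(data, 15, rng)
acc = sum(predict(x) == y for x, y in data) / len(data)
```

An adaptive variant such as ARF would additionally tune the number of trees or feature sampling on the fly, which is the part the paper contributes.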
NASA Astrophysics Data System (ADS)
Tompson, A. F. B.; Mellors, R. J.; Dyer, K.; Yang, X.; Chen, M.; Trainor Guitton, W.; Wagoner, J. L.; Ramirez, A. L.
2014-12-01
A stochastic joint inverse algorithm is used to analyze diverse geophysical and hydrologic data associated with a geothermal prospect. The approach uses a Markov Chain Monte Carlo (MCMC) global search algorithm to develop an ensemble of hydrothermal groundwater flow models that are most consistent with the observations. The algorithm utilizes an initial conceptual model descriptive of structural (geology), parametric (permeability) and hydrothermal (saturation, temperature) characteristics of the geologic system. Initial (a-priori) estimates of uncertainty in these characteristics are used to drive simulations of hydrothermal fluid flow and related geophysical processes in a large number of random realizations of the conceptual geothermal system spanning these uncertainties. The process seeks to improve the conceptual model by developing a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the a-priori uncertainties. The algorithm has been tested on a geothermal prospect located at Superstition Mountain, California and has been successful in creating a suite of models compatible with available temperature, surface resistivity, and magnetotelluric (MT) data. Although the MCMC method is highly flexible and capable of accommodating multiple and diverse datasets, a typical inversion may require the evaluation of thousands of possible model runs whose sophistication and complexity may evolve with the magnitude of data considered. As a result, we are testing the use of sensitivity analyses to better identify critical uncertain variables, lower order surrogate models to streamline computational costs, and value of information analyses to better assess optimal use of related data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
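The MCMC machinery behind this kind of joint inversion is generic. A minimal Metropolis sampler over a single hypothetical model parameter, with an invented Gaussian misfit standing in for the multi-dataset likelihood (a sketch only, not the LLNL implementation):

```python
import math
import random

def metropolis(loglike, x0, prop_std, n, rng):
    """Plain Metropolis sampler: Gaussian random-walk proposals, accepted
    with probability min(1, exp(delta log-likelihood))."""
    x, ll = x0, loglike(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, prop_std)
        llp = loglike(xp)
        if llp >= ll or rng.random() < math.exp(llp - ll):
            x, ll = xp, llp                 # accept the proposal
        chain.append(x)                     # else keep the current state
    return chain

rng = random.Random(9)
# toy misfit: a single permeability-like parameter best fit near 2.0
loglike = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2
chain = metropolis(loglike, x0=0.0, prop_std=0.5, n=20000, rng=rng)
post_mean = sum(chain[5000:]) / len(chain[5000:])
```

In the real inversion each log-likelihood evaluation is a full hydrothermal flow plus geophysical forward simulation, which is exactly why the abstract turns to surrogates and sensitivity analysis.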
NASA Astrophysics Data System (ADS)
Altarelli, F.; Monasson, R.; Zamponi, F.
2008-01-01
We study the performance of stochastic heuristic search algorithms on Uniquely Extendible Constraint Satisfaction Problems with random inputs. We show that, for any heuristic preserving the Poissonian nature of the underlying instance, the (heuristic-dependent) largest ratio αa of constraints per variable for which a search algorithm is likely to find solutions is smaller than the critical ratio αd above which solutions are clustered and highly correlated. In addition we show that the clustering ratio can be reached, when the number k of variables per constraint goes to infinity, by the so-called Generalized Unit Clause heuristic.
Witteveen, Jeroen A.S.; Bijl, Hester
2009-10-01
The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.
Analytic and Algorithmic Solution of Random Satisfiability Problems
NASA Astrophysics Data System (ADS)
Mézard, M.; Parisi, G.; Zecchina, R.
2002-08-01
We study the satisfiability of random Boolean expressions built from many clauses with K variables per clause (K-satisfiability). Expressions with a ratio α of clauses to variables less than a threshold αc are almost always satisfiable, whereas those with a ratio above this threshold are almost always unsatisfiable. We show the existence of an intermediate phase below αc, where the proliferation of metastable states is responsible for the onset of complexity in search algorithms. We introduce a class of optimization algorithms that can deal with these metastable states; one such algorithm has been tested successfully on the largest existing benchmark of K-satisfiability.
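A minimal end-to-end illustration of this setting: generate a random 3-SAT instance (here planted, so it is guaranteed satisfiable, unlike the purely random ensemble studied in the paper) and solve it with a simple random-walk search in the spirit of the stochastic algorithms discussed. All parameters are illustrative:

```python
import random

def planted_ksat(n, m, k, rng):
    """Random K-SAT clauses, kept only if a hidden planted assignment
    satisfies them, so the instance is satisfiable by construction."""
    planted = [rng.random() < 0.5 for _ in range(n)]
    clauses = []
    while len(clauses) < m:
        vs = rng.sample(range(n), k)
        clause = [(v + 1) if rng.random() < 0.5 else -(v + 1) for v in vs]
        if any((lit > 0) == planted[abs(lit) - 1] for lit in clause):
            clauses.append(clause)
    return clauses

def random_walk_sat(clauses, n, rng, max_flips=200000):
    """Pick an unsatisfied clause, flip a random variable in it, repeat."""
    assign = [rng.random() < 0.5 for _ in range(n)]
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assign[abs(l) - 1] == (l > 0) for l in c)]
        if not unsat:
            return assign                     # all clauses satisfied
        lit = rng.choice(rng.choice(unsat))
        assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None

rng = random.Random(2)
clauses = planted_ksat(20, 30, 3, rng)   # ratio alpha = 1.5, far below threshold
solution = random_walk_sat(clauses, 20, rng)
```

At ratios approaching the clustered phase described in the abstract, this kind of local walk stalls in metastable states, which is what motivates the survey-propagation-style algorithms the paper introduces.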
Dynamic Reconfiguration and Routing Algorithms for IP-Over-WDM Networks With Stochastic Traffic
NASA Astrophysics Data System (ADS)
Brzezinski, Andrew; Modiano, Eytan
2005-10-01
We develop algorithms for joint IP-layer routing and WDM logical topology reconfiguration in IP-over-WDM networks experiencing stochastic traffic. At the wavelength division multiplexing (WDM) layer, we associate a nonnegligible overhead with WDM reconfiguration, during which time tuned transceivers cannot service backlogged data. The Internet Protocol (IP) layer is modeled as a queueing system. We demonstrate that the proposed algorithms achieve asymptotic throughput optimality by using frame-based maximum weight scheduling decisions. We study both fixed and variable frame durations. In addition to dynamically triggering WDM reconfiguration, our algorithms specify precisely how to route packets over the IP layer during the phases in which the WDM layer remains fixed. We demonstrate that optical-layer constraints do not affect the results, and provide an analysis of the specific case of WDM networks with multiple ports per node. In order to gauge the delay properties of our algorithms, we conduct a simulation study and demonstrate an important tradeoff between WDM reconfiguration and IP-layer routing. We find that multihop routing is extremely beneficial at low-throughput levels, while single-hop routing achieves improved delay at high-throughput levels. For a simple access network, we demonstrate through simulation the benefit of employing multihop IP-layer routes.
Simple-random-sampling-based multiclass text classification algorithm.
Liu, Wuying; Wang, Lin; Yi, Mianzhu
2014-01-01
Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a pressing concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.
Stochastic models: theory and simulation.
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
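As a concrete instance of "generating samples of a stochastic model", the Ornstein-Uhlenbeck process has an exact-in-distribution one-step recursion, sketched below. The process choice and parameter values are illustrative, not taken from the report:

```python
import math
import random

def ou_path(theta, sigma, dt, n, rng, x0=0.0):
    """Samples of the Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
    via its exact-in-distribution one-step recursion (no discretization error
    in the marginal law, unlike Euler stepping)."""
    a = math.exp(-theta * dt)
    s = sigma * math.sqrt((1.0 - a * a) / (2.0 * theta))  # exact step std
    x, path = x0, [x0]
    for _ in range(n):
        x = a * x + s * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(0)
path = ou_path(theta=1.0, sigma=1.0, dt=0.1, n=50000, rng=rng)
tail = path[1000:]                          # discard the transient
mean = sum(tail) / len(tail)
var = sum((x - mean) ** 2 for x in tail) / (len(tail) - 1)
```

The stationary variance of this process is sigma^2 / (2*theta) = 0.5, which the long-run sample variance should approach; such generated paths are exactly the kind of input the report describes feeding into deterministic simulation codes.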
Chen, Zheng; Liu, Liu; Mu, Lin
2017-05-03
In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects are addressed: in particular, uniform numerical stability with respect to the Knudsen number ϵ and a uniform-in-ϵ error estimate are established. For temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and sAP property of the method are conducted.
NASA Astrophysics Data System (ADS)
Zhu, Jianqu; Jin, Weidong; Guo, Feng
2017-04-01
The stochastic resonance (SR) behavior of a linear oscillator with two kinds of fractional derivatives and random frequency is investigated. Based on linear system theory, and applying the definition of the Gamma function and of fractional derivatives, we derive the expression for the output amplitude gain (OAG). A stochastic multiresonance is found on the OAG curve versus the first kind of fractional derivative exponent. The SR occurs on the OAG as a function of the second kind of fractional exponent, as a function of the viscous damping and the friction coefficients, and as a function of the system's frequency. The bona fide SR also takes place on the OAG curve versus the driving frequency.
NASA Astrophysics Data System (ADS)
Sabelfeld, K. K.; Kireeva, A. E.
2017-01-01
This paper describes stochastic models of electron-hole recombination in inhomogeneous semiconductors in the two-dimensional and three-dimensional cases, developed on the basis of discrete (cellular automaton) and continuous (Monte Carlo) approaches. The mathematical model of electron-hole recombination, constructed from a system of spatially inhomogeneous nonlinear integro-differential Smoluchowski equations, is presented. The continuous Monte Carlo algorithm and the discrete cellular automaton algorithm used for the simulation of particle recombination in semiconductors are described.
Monotonic continuous-time random walks with drift and stochastic reset events
NASA Astrophysics Data System (ADS)
Montero, Miquel; Villarroel, Javier
2013-01-01
In this paper we consider a stochastic process that may experience random reset events which suddenly bring the system to the starting value and analyze the relevant statistical magnitudes. We focus our attention on monotonic continuous-time random walks with a constant drift: The process increases between the reset events, either by the effect of the random jumps, or by the action of the deterministic drift. As a result of all these combined factors interesting properties emerge, like the existence (for any drift strength) of a stationary transition probability density function, or the faculty of the model to reproduce power-law-like behavior. General formulas for two extreme statistics, the survival probability, and the mean exit time are also derived. To corroborate in an independent way the results of the paper, Monte Carlo methods were used. These numerical estimations are in full agreement with the analytical predictions.
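A simulation sketch of this kind of process, with a constant drift, exponentially distributed positive jumps, and Poissonian reset events that return the walker to the origin. The rates are chosen arbitrarily for illustration:

```python
import random

def ctrw_with_resets(t_end, drift, jump_rate, reset_rate, rng):
    """Monotonic continuous-time random walk: deterministic drift plus
    exponential positive jumps, interrupted by Poissonian resets to 0."""
    t, x, samples = 0.0, 0.0, []
    while t < t_end:
        t_jump = rng.expovariate(jump_rate)    # time to the next jump
        t_reset = rng.expovariate(reset_rate)  # time to the next reset
        dt = min(t_jump, t_reset, t_end - t)
        x += drift * dt                        # deterministic growth
        t += dt
        if t >= t_end:
            break
        if t_jump < t_reset:
            x += rng.expovariate(1.0)          # random positive jump
        else:
            x = 0.0                            # reset to the starting value
        samples.append(x)
    return samples

rng = random.Random(4)
samples = ctrw_with_resets(1000.0, 1.0, 1.0, 1.0, rng)
```

Because resets arrive at a constant rate, the time since the last reset is exponentially distributed, which is what gives the process the stationary transition density mentioned in the abstract.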
Bridges for Pedestrians with Random Parameters using the Stochastic Finite Elements Analysis
NASA Astrophysics Data System (ADS)
Szafran, J.; Kamiński, M.
2017-02-01
The main aim of this paper is to present a Stochastic Finite Element Method analysis of the principal design parameters of bridges for pedestrians: the eigenfrequency and the deflection of the bridge span. They are considered with respect to the random thickness of plates in the boxed-section bridge platform, the Young's modulus of structural steel, and the static load resulting from a crowd of pedestrians. The influence of the quality of the numerical model in the context of traditional FEM is also shown using the example of a simple steel shield. Steel structures with random parameters are discretized in exactly the same way as for the needs of the traditional Finite Element Method. Its probabilistic version is provided thanks to the Response Function Method, where several numerical tests with random parameter values varying around their mean value enable the determination of the structural response and, thanks to the Least Squares Method, its final probabilistic moments.
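The Response Function Method step can be sketched as follows: run the deterministic model at a few parameter values around the mean, fit a quadratic response surface through those runs, and extract probabilistic moments cheaply from the surrogate. The "model" below is a made-up deflection-versus-thickness relation, not the paper's bridge model:

```python
import random

def quad_surrogate(model, mu, h):
    """Exact quadratic response through model runs at mu-h, mu, mu+h
    (finite-difference estimates of the first two derivatives)."""
    y0, y1, y2 = model(mu - h), model(mu), model(mu + h)
    d1 = (y2 - y0) / (2.0 * h)
    d2 = (y2 - 2.0 * y1 + y0) / (h * h)
    return lambda x: y1 + d1 * (x - mu) + 0.5 * d2 * (x - mu) ** 2

def surrogate_moments(surr, mu, sigma, rng, n=50000):
    """Probabilistic moments of the response from cheap surrogate samples."""
    ys = [surr(rng.gauss(mu, sigma)) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, var

# made-up "structural response": deflection ~ thickness^-3 (plate stiffness)
model = lambda t: 1.0 / t ** 3
rng = random.Random(11)
surr = quad_surrogate(model, mu=1.0, h=0.05)
resp_mean, resp_var = surrogate_moments(surr, mu=1.0, sigma=0.03, rng=rng)
```

The expensive FEM solves happen only at the few fitting points; all the statistics come from the polynomial, which is the efficiency argument behind the method.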
NASA Astrophysics Data System (ADS)
Albert, J.
2016-12-01
Stochastic simulation of reaction networks is limited by two factors: accuracy and time. The Gillespie algorithm (GA) is a Monte Carlo-type method for constructing probability distribution functions (pdfs) from statistical ensembles. Its accuracy is therefore a function of the computing time. The chemical master equation (CME) is a more direct route to obtaining the pdfs; however, solving the CME is generally very difficult for large networks. We propose a method that combines both approaches in order to simulate stochastically a part of a network. The network is first divided into two parts: A and B. Part A is simulated using the GA, while the solution of the CME for part B, with initial conditions imposed by simulation results of part A, is fed back into the GA. This cycle is then repeated a desired number of times. The advantages of this synergy between the two approaches are that 1) the GA needs to simulate only a part of the whole network, and hence is faster, and 2) the CME is necessarily simpler to solve, as the part of the network it describes is smaller. We demonstrate the utility of this approach on two examples: a positive feedback (genetic switch) and oscillations driven by a negative feedback.
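The GA half of the scheme is the standard stochastic simulation algorithm; for a single decay reaction it reduces to a few lines. A minimal sketch with a toy reaction, not the feedback networks of the abstract:

```python
import random

def gillespie_decay(x0, rate, t_end, rng):
    """Gillespie direct method for the single reaction A -> 0 with
    propensity rate * A: draw the next reaction time, fire, repeat."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.expovariate(rate * x)   # waiting time to the next event
        if t > t_end:
            break
        x -= 1                           # one A molecule decays
    return x

rng = random.Random(42)
# ensemble average approximates E[A(t)] = x0 * exp(-rate * t)
runs = [gillespie_decay(100, 0.5, 2.0, rng) for _ in range(2000)]
mean_a = sum(runs) / len(runs)
```

With several reaction channels the same loop draws the waiting time from the total propensity and then picks a channel proportionally to its propensity; the hybrid method in the abstract runs this loop only for part A of the network.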
A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization
NASA Astrophysics Data System (ADS)
Chakraborty, Uttara
2012-10-01
This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to find the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against a genetic algorithm, simulated annealing, and a (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.
A new algorithm for calculating the curvature perturbations in stochastic inflation
Fujita, Tomohiro; Kawasaki, Masahiro; Tada, Yuichiro; Takesako, Tomohiro
2013-12-01
We propose a new approach for calculating the curvature perturbations produced during inflation in the stochastic formalism. In our formalism, the fluctuations of the e-foldings are directly calculated without perturbatively expanding the inflaton field and they are connected to the curvature perturbations by the δN formalism. The result automatically includes the contributions of the higher order perturbations because we solve the equation of motion non-perturbatively. In this paper, we analytically prove that our result (the power spectrum and the nonlinearity parameter) is consistent with the standard result in single field slow-roll inflation. We also describe the algorithm for numerical calculations of the curvature perturbations in more general inflation models.
NASA Astrophysics Data System (ADS)
Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.
2012-05-01
In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environment conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of target location by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. Using this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of travel times, known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities similar to those provided by a deterministic method, such as the ray method.
Bao, Haibo; Cao, Jinde
2011-01-01
This paper is concerned with the state estimation problem for a class of discrete-time stochastic neural networks (DSNNs) with random delays. The effect of both the variation range and the distribution probability of the time delay is taken into account in the proposed approach. The stochastic disturbances are described in terms of a Brownian motion and the time-varying delay is characterized by introducing a Bernoulli stochastic variable. By employing a Lyapunov-Krasovskii functional, sufficient delay-distribution-dependent conditions are established in terms of linear matrix inequalities (LMIs) that guarantee the existence of the state estimator and that can be checked readily by the Matlab toolbox. The main feature of the results obtained in this paper is that they depend not only on the bound but also on the distribution probability of the time delay, and we obtain a larger allowed variation range of the delay; hence our results are less conservative than the traditional delay-independent ones. One example is given to illustrate the effectiveness of the proposed result.
Theory of weak scattering of stochastic electromagnetic fields from deterministic and random media
Tong Zhisong; Korotkova, Olga
2010-09-15
The theory of scattering of scalar stochastic fields from deterministic and random media is generalized to the electromagnetic domain under the first-order Born approximation. The analysis allows for determining the changes in spectrum, coherence, and polarization of electromagnetic fields produced on their propagation from the source to the scattering volume, interaction with the scatterer, and propagation from the scatterer to the far field. An example of scattering of a field produced by a {delta}-correlated partially polarized source and scattered from a {delta}-correlated medium is provided.
Single realization stochastic FDTD for weak scattering waves in biological random media.
Tan, Tengmeng; Taflove, Allen; Backman, Vadim
2013-02-01
This paper introduces an iterative scheme to overcome unresolved issues in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble-average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random medium problem which, when simulated with the proposed S-FDTD, indeed produces a very accurate result.
NASA Astrophysics Data System (ADS)
Guo, Feng; Zhu, Cheng-Yin; Cheng, Xiao-Feng; Li, Heng
2016-10-01
Stochastic resonance in a fractional harmonic oscillator with random mass and signal-modulated noise is investigated. Applying linear system theory and the characteristics of the noises, the analytical expression for the mean output amplitude gain (OAG) is obtained. It is shown that the OAG varies non-monotonically with the intensity of the multiplicative dichotomous noise, with the frequency of the driving force, and with the system frequency. In addition, the OAG is a non-monotonic function of the system friction coefficient, of the viscous damping coefficient, and of the fractional exponent.
Dynamics of the stochastic Leslie-Gower predator-prey system with randomized intrinsic growth rate
NASA Astrophysics Data System (ADS)
Zhao, Dianli; Yuan, Sanling
2016-11-01
This paper investigates the stochastic Leslie-Gower predator-prey system with randomized intrinsic growth rate. Existence of a unique global positive solution is proved firstly. Then we obtain the sufficient conditions for permanence in mean and almost sure extinction of the system. Furthermore, the stationary distribution is derived based on the positive equilibrium of the deterministic model, which shows the population is not only persistent but also convergent by time average under some assumptions. Finally, we illustrate our conclusions through two examples.
Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J
2016-12-01
This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they cannot inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining low computational cost and ease of implementation compared to other conductance-based and Markovian stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
Stochastic characterization of phase detection algorithms in phase-shifting interferometry
Munteanu, Florin
2016-11-01
Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. In order to cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. The usefulness of this method derives from its ability to guide the selection of the best algorithm for specific measurement situations.
Dynamics of asynchronous random Boolean networks with asynchrony generated by stochastic processes.
Deng, Xutao; Geng, Huimin; Matache, Mihaela Teodora
2007-03-01
An asynchronous Boolean network with N nodes whose states at each time point are determined by certain parent nodes is considered. We make use of the models developed by Matache and Heidel [Matache, M.T., Heidel, J., 2005. Asynchronous random Boolean network model based on elementary cellular automata rule 126. Phys. Rev. E 71, 026232] for a constant number of parents, and Matache [Matache, M.T., 2006. Asynchronous random Boolean network model with variable number of parents based on elementary cellular automata rule 126. IJMPB 20 (8), 897-923] for a varying number of parents. In both these papers the authors consider an asynchronous updating of all nodes, with asynchrony generated by various random distributions. We supplement those results by using various stochastic processes as generators for the number of nodes to be updated at each time point. In this paper we use the following stochastic processes: Poisson process, random walk, birth and death process, Brownian motion, and fractional Brownian motion. We study the dynamics of the model through sensitivity of the orbits to initial values, bifurcation diagrams, and fixed-point analysis. The dynamics of the system show that the number of nodes to be updated at each time point is of great importance, especially for the random walk, the birth and death, and the Brownian motion processes. Small or moderate values for the number of updated nodes generate order, while large values may generate chaos depending on the underlying parameters. The Poisson process generates order. With fractional Brownian motion, as the values of the Hurst parameter increase, the system exhibits order for a wider range of combinations of the underlying parameters.
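A minimal version of this set-up can be coded directly: a ring network updated by rule 126, with the number of nodes updated at each time point drawn from one of the stochastic generators (a Poisson variate here). Network size, Poisson rate, and step count are illustrative assumptions.

```python
import numpy as np

def step_rule126(state, n_update, rng):
    """Update n_update randomly chosen nodes of a ring network using
    elementary cellular automaton rule 126: a node becomes 0 when it and
    both ring neighbours agree, and 1 otherwise."""
    n = len(state)
    new = state.copy()
    idx = rng.choice(n, size=min(n_update, n), replace=False)
    for i in idx:
        left, right = state[(i - 1) % n], state[(i + 1) % n]
        new[i] = 0 if left == state[i] == right else 1
    return new

def run_poisson_asynchronous(n=100, lam=10, steps=200, seed=0):
    """Asynchronous dynamics in which the number of nodes updated at each
    time point is a Poisson variate (one of the generators in the paper;
    the rate lam is an assumed value)."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=n)
    history = [state.copy()]
    for _ in range(steps):
        n_update = rng.poisson(lam)
        state = step_rule126(state, n_update, rng)
        history.append(state.copy())
    return np.array(history)

hist = run_poisson_asynchronous()
```

Swapping `rng.poisson(lam)` for increments of a random walk, birth-death process, or (fractional) Brownian motion reproduces the other generators studied in the paper.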
Macnamara, Shev; Bersani, Alberto M; Burrage, Kevin; Sidje, Roger B
2008-09-07
Recently the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)] and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then by advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis-Menten enzyme kinetics, double phosphorylation, the Goldbeter-Koshland switch, and the mitogen activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.
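For readers unfamiliar with the SSA that the QSSA accelerates, here is Gillespie's direct method applied to the first example system, Michaelis-Menten kinetics (E + S <-> ES -> E + P). Rate constants and copy numbers are illustrative, not the paper's.

```python
import numpy as np

def gillespie_mm(e0=20, s0=100, k1=0.01, km1=0.1, k2=0.1,
                 t_end=200.0, seed=0):
    """Gillespie direct-method SSA for Michaelis-Menten kinetics
    E + S <-> ES -> E + P (rate constants are illustrative)."""
    rng = np.random.default_rng(seed)
    e, s, es, p = e0, s0, 0, 0
    t = 0.0
    while t < t_end:
        a = np.array([k1 * e * s, km1 * es, k2 * es])  # propensities
        a0 = a.sum()
        if a0 == 0:
            break                                      # nothing left to fire
        t += rng.exponential(1.0 / a0)                 # time to next reaction
        r = rng.choice(3, p=a / a0)                    # which reaction fires
        if r == 0:
            e, s, es = e - 1, s - 1, es + 1            # binding
        elif r == 1:
            e, s, es = e + 1, s + 1, es - 1            # unbinding
        else:
            e, es, p = e + 1, es - 1, p + 1            # catalysis
    return e, s, es, p

e, s, es, p = gillespie_mm()
```

The QSSA-based accelerations discussed in the abstract replace the explicit simulation of the fast binding/unbinding pair by an equilibrium approximation, advancing only the slow catalysis step exactly.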
MOQA min-max heapify: A randomness preserving algorithm
NASA Astrophysics Data System (ADS)
Gao, Ang; Hennessy, Aoife; Schellekens, Michel
2012-09-01
MOQA is a high-level data structuring language, designed to allow for modular static timing analysis [1, 2, 3]. In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. Modularity is a strong advantage for the programmer: the capacity to combine parts of code, where the average time of the whole is simply the sum of the times of the parts, is very helpful in static analysis and is not available in current languages. Modularity also improves the precision of average-case analysis, supporting accurate estimates of the average number of basic operations of MOQA programs. The mathematical theory underpinning this approach is that of random structures and their preservation: applying any MOQA operation to all elements of a random structure results in an output isomorphic to one or more random structures, which is the key to systematic timing. Here we introduce the approach in a self-contained way and provide a MOQA version of the well-known min-max heapify algorithm, constructed with the MOQA product operation. We demonstrate the "randomness preservation" property of the algorithm and illustrate the applicability of our method by deriving its exact average time.
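The notion of an "exact average time" can be made concrete by brute force on a small input size: count a basic operation (comparisons here) for every input order and average. The sketch below does this for ordinary bottom-up min-heap construction rather than MOQA's min-max heapify; it only illustrates the kind of quantity that MOQA derives symbolically from the code.

```python
from itertools import permutations

def heapify_count(a):
    """Bottom-up binary min-heap construction, returning the number of
    comparisons made (a stand-in for the 'basic operation' counted in
    average-case analysis; this is not the MOQA min-max heapify)."""
    a = list(a)
    n = len(a)
    comparisons = 0
    for start in range(n // 2 - 1, -1, -1):
        i = start
        while 2 * i + 1 < n:
            child = 2 * i + 1
            if child + 1 < n:          # pick the smaller of two children
                comparisons += 1
                if a[child + 1] < a[child]:
                    child += 1
            comparisons += 1           # compare parent with chosen child
            if a[child] < a[i]:
                a[i], a[child] = a[child], a[i]
                i = child
            else:
                break
    return comparisons

# exact average over the 4! = 24 input orders of size 4
counts = [heapify_count(p) for p in permutations(range(4))]
avg = sum(counts) / len(counts)
```

Randomness preservation is what lets MOQA obtain such averages compositionally, without enumerating the factorially many inputs.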
NASA Astrophysics Data System (ADS)
Xu, Lei; Zhai, Wanming
2017-10-01
This paper develops a computational model for stochastic analysis and reliability assessment of vehicle-track systems subject to earthquakes and track random irregularities. In this model, the earthquake is expressed as a non-stationary random process simulated by spectral representation and random functions, and the track random irregularities, with ergodic properties on amplitudes, wavelengths and probabilities, are characterized by a track irregularity probabilistic model; the number theoretical method (NTM) is then applied to efficiently select representative samples of earthquakes and track random irregularities. Furthermore, a vehicle-track coupled model is presented to obtain the time-domain dynamic responses of vehicle-track systems due to the earthquakes and track random irregularities, and the probability density evolution method (PDEM) is introduced to describe the evolution of probability from excitation input to response output by treating the vehicle-track system as a probability-conservative system, which lays the foundation for reliability assessment of vehicle-track systems. The effectiveness of the proposed model is validated by comparison with Monte Carlo results from a statistical viewpoint. As an illustrative example, the random vibrations of a high-speed railway vehicle running on track slabs excited by lateral seismic waves and track random irregularities are analyzed. Some significant conclusions can be drawn: track irregularities additionally promote the dynamic influence of earthquakes, especially on the maximum values and dispersion of the responses; the characteristic frequencies or frequency ranges governed by earthquakes and by track random irregularities differ greatly; and the lateral seismic waves dominate or even change the characteristic frequencies of some lateral dynamic response indices at low frequency.
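The spectral representation step can be illustrated in isolation: a random-process sample is synthesized as a sum of cosines with random phases and amplitudes set by a power spectral density. The Kanai-Tajimi-style PSD and its parameters below are assumptions, and the time-modulating envelope that makes the paper's earthquake process non-stationary is omitted.

```python
import numpy as np

def spectral_representation(t, omega_max=50.0, n_freq=256, seed=0):
    """Stationary random-process sample via the spectral representation
    method: a sum of cosines with i.i.d. uniform random phases. The
    Kanai-Tajimi-style PSD used here is illustrative; a non-stationary
    earthquake model would multiply the sample by a time envelope."""
    rng = np.random.default_rng(seed)
    dw = omega_max / n_freq
    omegas = (np.arange(n_freq) + 0.5) * dw
    wg, zg, s0 = 15.0, 0.6, 1.0                       # filter params (assumed)
    num = wg**4 + (2 * zg * wg * omegas) ** 2
    den = (wg**2 - omegas**2) ** 2 + (2 * zg * wg * omegas) ** 2
    psd = s0 * num / den                              # Kanai-Tajimi PSD
    phases = rng.uniform(0, 2 * np.pi, n_freq)
    amps = np.sqrt(2 * psd * dw)
    return (amps[None, :] * np.cos(np.outer(t, omegas) + phases[None, :])).sum(axis=1)

t = np.linspace(0, 10, 1000)
x = spectral_representation(t)
```

In the paper, the random phases themselves are generated through the random-function idea, and the NTM picks a small representative set of such samples instead of many Monte Carlo draws.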
Hermann, Philipp; Mrkvička, Tomáš; Mattfeldt, Torsten; Minárová, Mária; Helisová, Kateřina; Nicolis, Orietta; Wartner, Fabian; Stehlík, Milan
2015-08-15
Fractals are models of natural processes with many applications in medicine. Recent studies in medicine show that fractals can be applied for cancer detection and the description of pathological architecture of tumors. This fact is not surprising, as, due to their irregular structure, cancerous cells can be interpreted as fractals. Inspired by the Sierpinski carpet, we introduce a flexible parametric model of random carpets. Randomization is introduced through binomial random variables. We provide an algorithm for estimation of the parameters of the model and illustrate theoretical and practical issues in the generation of Sierpinski gaskets and Hausdorff measure calculations. Stochastic geometry models can also serve as models for binary cancer images. Recently, a Boolean model was applied to 200 images of mammary cancer tissue and 200 images of mastopathic tissue. Here, we describe the Quermass-interaction process, which can handle much more variation in the cancer data, and we apply it to the images. It was found that mastopathic tissue deviates significantly more strongly from the Quermass-interaction process, which describes interactions among particles, than mammary cancer tissue does. The Quermass-interaction process serves as a model for tissue whose structure is broken to a certain level. However, the random fractal model fits mastopathic tissue well. We provide a novel discrimination method between mastopathic and mammary cancer tissue on the basis of a complex wavelet-based self-similarity measure, with classification rates of more than 80%. This similarity measure relates to the Hurst exponent and fractional Brownian motions. The R package FractalParameterEstimation is developed and introduced in the paper.
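A binomially randomized carpet of the kind described can be generated in a few lines: at each subdivision level the centre cell is removed (as in the deterministic Sierpinski carpet) and each remaining cell is kept with probability p. The retention probability and depth are assumed parameters; the paper's parametric family may differ in detail.

```python
import numpy as np

def random_sierpinski_carpet(depth=4, p=0.9, seed=0):
    """Random Sierpinski carpet: at each subdivision the centre cell is
    always removed and each of the 8 remaining cells survives
    independently with probability p (binomial randomization; p and
    depth are assumed values)."""
    rng = np.random.default_rng(seed)
    carpet = np.ones((1, 1), dtype=bool)
    for _ in range(depth):
        n = carpet.shape[0]
        new = np.zeros((3 * n, 3 * n), dtype=bool)
        for i in range(3):
            for j in range(3):
                if i == 1 and j == 1:
                    continue  # centre block always removed
                keep = rng.random(carpet.shape) < p
                new[i * n:(i + 1) * n, j * n:(j + 1) * n] = carpet & keep
        carpet = new
    return carpet

carpet = random_sierpinski_carpet()
filled = int(carpet.sum())  # expected number of filled cells is (8p)^depth
```

The surviving-cell count follows a branching process with offspring distribution Binomial(8, p), which is what makes parameter estimation from observed carpets tractable.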
Wu, Zhizhang Huang, Zhongyi
2016-07-15
In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.
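The deterministic core of the scheme, a time-splitting pseudospectral step for the Schrödinger equation with a periodic potential, can be sketched as below. The stochastic gPC-Galerkin extension for the random external potential is omitted, and the potential, initial data, and scaling parameter are illustrative assumptions (this is a plain split-step method, not the Bloch-decomposition variant).

```python
import numpy as np

def split_step_schrodinger(n=256, eps=1.0, dt=1e-3, steps=200):
    """Strang-split pseudospectral integration of
    i*eps*u_t = -(eps^2/2) u_xx + V(x) u on [0, 2*pi], periodic BCs.
    V, the initial condition, and eps are illustrative choices."""
    x = 2 * np.pi * np.arange(n) / n
    k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi  # integer wavenumbers
    V = np.cos(x)                                       # periodic lattice potential
    u = np.exp(-((x - np.pi) ** 2)) + 0j                # Gaussian initial data
    u /= np.sqrt(np.sum(np.abs(u) ** 2) * (2 * np.pi / n))
    for _ in range(steps):
        u = np.fft.ifft(np.exp(-1j * eps * dt * k**2 / 4) * np.fft.fft(u))  # half kinetic
        u = np.exp(-1j * dt * V / eps) * u                                  # full potential
        u = np.fft.ifft(np.exp(-1j * eps * dt * k**2 / 4) * np.fft.fft(u))  # half kinetic
    return x, u

x, u = split_step_schrodinger()
norm = np.sqrt(np.sum(np.abs(u) ** 2) * (2 * np.pi / len(x)))  # conserved
```

Each sub-step is unitary, so the L2 norm is conserved to round-off, which is one of the stability properties the paper's stochastic generalization preserves.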
Neuhauser, Daniel; Rabani, Eran; Baer, Roi
2013-04-04
A fast method is developed for calculating the random phase approximation (RPA) correlation energy for density functional theory. The correlation energy is given by a trace over a projected RPA response matrix, and the trace is taken by a stochastic approach using random perturbation vectors. For a fixed statistical error in the total energy per electron, the method scales, at most, quadratically with the system size; however, in practice, due to self-averaging, it requires less statistical sampling as the system grows, and the performance is close to linear scaling. We demonstrate the method by calculating the RPA correlation energy for cadmium selenide and silicon nanocrystals with over 1500 electrons. We find that the RPA correlation energies per electron are largely independent of the nanocrystal size. In addition, we show that a correlated sampling technique enables calculation of the energy difference between two slightly distorted configurations with scaling and a statistical error similar to that of the total energy per electron.
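The stochastic trace evaluation at the heart of the method is the standard random-vector (Hutchinson) estimator, shown here on a generic symmetric matrix rather than the projected RPA response matrix of the paper.

```python
import numpy as np

def stochastic_trace(a, n_samples=2000, seed=0):
    """Hutchinson stochastic trace estimator: tr(A) approximated by the
    average of z^T A z over random vectors z with i.i.d. +/-1 entries."""
    rng = np.random.default_rng(seed)
    n = a.shape[0]
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        est += z @ a @ z
    return est / n_samples

rng = np.random.default_rng(1)
m = rng.normal(size=(50, 50))
a = (m + m.T) / 2            # symmetric test matrix
exact = np.trace(a)
approx = stochastic_trace(a)
```

The estimator's variance involves only off-diagonal matrix elements, and the self-averaging noted in the abstract corresponds to this variance per electron shrinking as the system grows.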
Wang, Xin-Fan; Wang, Jian-Qiang; Deng, Sheng-Yue
2013-01-01
We investigate dynamic stochastic multicriteria decision-making (SMCDM) problems in which the criterion values take the form of log-normally distributed random variables and the argument information is collected from different periods. We propose two new geometric aggregation operators, namely the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG and LNDWG operators to aggregate the log-normally distributed criterion values, uses Shannon's entropy model to generate the time weight vector, and uses the expectations and variances of the log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of the developed method.
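A weighted geometric aggregation of log-normal arguments has a closed form under independence, which the sketch below exploits. The operator is reconstructed from standard log-normal algebra; the paper's exact LNDWG definition may differ.

```python
import math

def lndwg(params, weights):
    """Weighted geometric aggregation of independent log-normal
    arguments: if ln X_i ~ N(mu_i, s_i^2), then prod X_i^{w_i} is
    log-normal with mu = sum w_i mu_i and s^2 = sum w_i^2 s_i^2.
    (Reconstructed form; the paper's operator may differ in detail.)"""
    mu = sum(w * m for w, (m, _) in zip(weights, params))
    var = sum(w**2 * s**2 for w, (_, s) in zip(weights, params))
    return mu, var

def expectation(mu, var):
    """Mean of a log-normal variable, used to rank alternatives."""
    return math.exp(mu + var / 2)

# two criteria with (mu, s) parameters and weights 0.6 / 0.4
agg_mu, agg_var = lndwg([(0.2, 0.1), (0.5, 0.2)], [0.6, 0.4])
score = expectation(agg_mu, agg_var)
```

Ranking alternatives then reduces to comparing such expectation (and, for ties or risk attitudes, variance) values across alternatives.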
Deng, Haishan; Xie, Shaofei; Xiang, Bingren; Zhan, Ying; Li, Wei; Li, Xiaohua; Jiang, Caiyun; Wu, Xiaohong; Liu, Dan
2014-01-01
Simultaneous determination of multiple weak chromatographic peaks via stochastic resonance algorithms has attracted much attention in recent years. However, optimization of the parameters is complicated and time-consuming, even though the single-well potential stochastic resonance algorithm (SSRA) has already reduced the number of parameters to one and simplified the process significantly. Worse, it is often difficult to preserve good peak shape in the amplified peaks. Therefore, a multiobjective genetic algorithm was employed to optimize the SSRA parameter for multiple objectives (i.e., S/N and peak shape) and multiple chromatographic peaks. The applicability of the proposed method was evaluated on an experimental data set of Sudan dyes, and the results showed an excellent quantitative relationship between concentration and response.
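One common single-well stochastic resonance form is an overdamped particle in a quartic potential driven by the noisy signal, with the well stiffness as the single tunable parameter. The sketch below uses this generic form with invented data; it is not the paper's SSRA implementation, and the parameter value and synthetic peak are assumptions.

```python
import numpy as np

def single_well_sr(signal, a=1.0, dt=0.01):
    """Single-well stochastic resonance filter: overdamped dynamics in
    V(x) = a*x^4/4 driven by the sampled signal, x' = -a*x^3 + s(t).
    The well parameter a is the single tuning parameter (generic form,
    not the paper's exact algorithm)."""
    x = 0.0
    out = np.empty(len(signal))
    for i, s in enumerate(signal):
        x += (-a * x**3 + s) * dt
        out[i] = x
    return out

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.01)
peak = np.exp(-0.5 * ((t - 30) / 2.0) ** 2)        # weak chromatographic peak
noisy = 0.2 * peak + rng.normal(0, 0.5, len(t))    # buried in baseline noise
filtered = single_well_sr(noisy)
```

In the paper's scheme, a multiobjective genetic algorithm searches over the single parameter (here `a`) to trade off S/N against peak-shape fidelity across several peaks at once.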
Horng, Shih-Cheng; Lin, Shin-Yeu; Lee, Loo Hay; Chen, Chun-Hung
2013-10-01
A three-phase memetic algorithm (MA) is proposed to find a suboptimal solution for real-time combinatorial stochastic simulation optimization (CSSO) problems with large discrete solution space. In phase 1, a genetic algorithm assisted by an offline global surrogate model is applied to find N good diversified solutions. In phase 2, a probabilistic local search method integrated with an online surrogate model is used to search for the approximate corresponding local optimum of each of the N solutions resulted from phase 1. In phase 3, the optimal computing budget allocation technique is employed to simulate and identify the best solution among the N local optima from phase 2. The proposed MA is applied to an assemble-to-order problem, which is a real-world CSSO problem. Extensive simulations were performed to demonstrate its superior performance, and results showed that the obtained solution is within 1% of the true optimum with a probability of 99%. We also provide a rigorous analysis to evaluate the performance of the proposed MA.
A matrix product algorithm for stochastic dynamics on locally tree-like graphs
NASA Astrophysics Data System (ADS)
Barthel, Thomas; de Bacco, Caterina; Franz, Silvio
In this talk, I describe a novel algorithm for the efficient simulation of generic stochastic dynamics of classical degrees of freedom defined on the vertices of locally tree-like graphs. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon the cavity method and ideas from quantum many-body theory, the algorithm is based on a matrix product approximation of the so-called edge messages - conditional probabilities of vertex variable trajectories. The matrix product edge messages (MPEM) are constructed recursively. Computational cost and accuracy can be tuned by controlling the matrix dimensions of the MPEM in truncations. In contrast to Monte Carlo simulations, the approach has a better error scaling and works both for single instances and in the thermodynamic limit. Due to the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations with unprecedented accuracy. The method is demonstrated for the prototypical non-equilibrium Glauber dynamics of an Ising spin system. Reference: arXiv:1508.03295.
Roberts, William M; Augustine, Steven B; Lawton, Kristy J; Lindsay, Theodore H; Thiele, Tod R; Izquierdo, Eduardo J; Faumont, Serge; Lindsay, Rebecca A; Britton, Matthew Cale; Pokala, Navin; Bargmann, Cornelia I; Lockery, Shawn R
2016-01-01
Random search is a behavioral strategy used by organisms from bacteria to humans to locate food that is randomly distributed and undetectable at a distance. We investigated this behavior in the nematode Caenorhabditis elegans, an organism with a small, well-described nervous system. Here we formulate a mathematical model of random search abstracted from the C. elegans connectome and fit to a large-scale kinematic analysis of C. elegans behavior at submicron resolution. The model predicts behavioral effects of neuronal ablations and genetic perturbations, as well as unexpected aspects of wild type behavior. The predictive success of the model indicates that random search in C. elegans can be understood in terms of a neuronal flip-flop circuit involving reciprocal inhibition between two populations of stochastic neurons. Our findings establish a unified theoretical framework for understanding C. elegans locomotion and a testable neuronal model of random search that can be applied to other organisms. DOI: http://dx.doi.org/10.7554/eLife.12572.001 PMID:26824391
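The flip-flop picture can be caricatured as a two-state run-and-turn Markov process, with reciprocal inhibition abstracted into the switching probabilities. The rates, speed, and step count below are invented for illustration and are not fitted to C. elegans data.

```python
import numpy as np

def run_and_tumble(steps=20_000, p_run_to_turn=0.01, p_turn_to_run=0.2,
                   speed=0.1, seed=0):
    """Idealized random search: a stochastic flip-flop between forward
    runs and reorienting turns. The two states stand in for the two
    reciprocally inhibiting neuronal populations (rates are assumed)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((steps, 2))
    heading = 0.0
    running = True
    for i in range(1, steps):
        if running:
            pos[i] = pos[i - 1] + speed * np.array([np.cos(heading),
                                                    np.sin(heading)])
            if rng.random() < p_run_to_turn:
                running = False
        else:
            pos[i] = pos[i - 1]                   # pause while reorienting
            heading = rng.uniform(0, 2 * np.pi)   # pick a new direction
            if rng.random() < p_turn_to_run:
                running = True
    return pos

pos = run_and_tumble()
```

Geometrically long runs punctuated by random reorientations yield the diffusive long-time coverage that makes this strategy effective for randomly distributed, undetectable food.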
NASA Astrophysics Data System (ADS)
Rangarajan, Nikhil; Parthasarathy, Arun; Rakheja, Shaloo
2017-06-01
In this paper, we propose a spin-based true random number generator (TRNG) that uses the inherent stochasticity in nanomagnets as the source of entropy. In contrast to previous works on spin-based TRNGs, we focus on the precessional switching strategy in nanomagnets to generate a truly random sequence. Using the NIST SP 800-22 test suite for randomness, we demonstrate that the output of the proposed TRNG circuit is statistically random with 99% confidence levels. The effects of process and temperature variability on the device are studied and shown to have no effect on the quality of randomness of the device. To benchmark the performance of the TRNG in terms of area, throughput, and power, we use SPICE (Simulation Program with Integrated Circuit Emphasis)-based models of the nanomagnet and combine them with CMOS device models at the 45 nm technology node. The throughput, power, and area footprints of the proposed TRNG are shown to be better than those of existing state-of-the-art TRNGs. We identify the optimal material and geometrical parameters of the nanomagnet to minimize the energy per bit at a given throughput of the TRNG circuit. Our results provide insights into the device-level modifications that can yield significant system-level improvements. Overall, the proposed spin-based TRNG circuit shows significant robustness, reliability, and fidelity and, therefore, has a potential for on-chip implementation.
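The simplest member of the NIST SP 800-22 suite used for validation, the frequency (monobit) test, is short enough to show in full; the bit source below is Python's PRNG standing in for the nanomagnet output.

```python
import math
import random

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits to +/-1, sum,
    and return the p-value erfc(|S| / sqrt(2n)). A sequence passes at
    the 1% significance level when the p-value is at least 0.01."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

random.seed(42)
bits = [random.getrandbits(1) for _ in range(10_000)]  # stand-in TRNG output
p_value = monobit_test(bits)
```

A heavily biased stream (e.g. all ones) produces a p-value indistinguishable from zero and fails immediately, while a balanced stream passes; the full suite applies 15 such tests to the hardware bit stream.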
A biased random-key genetic algorithm for data clustering.
Festa, P
2013-09-01
Cluster analysis aims at finding subsets (clusters) of a given set of entities, which are homogeneous and/or well separated. Starting from the 1990s, cluster analysis has been applied to several domains with numerous applications. It has emerged as one of the most exciting interdisciplinary fields, having benefited from concepts and theoretical results obtained by different scientific research communities, including genetics, biology, biochemistry, mathematics, and computer science. The last decade has brought several new algorithms, which are able to solve larger sized and real-world instances. We will give an overview of the main types of clustering and criteria for homogeneity or separation. Solution techniques are discussed, with special emphasis on the combinatorial optimization perspective, with the goal of providing conceptual insights and literature references to the broad community of clustering practitioners. A new biased random-key genetic algorithm is also described and compared with several efficient hybrid GRASP algorithms recently proposed to cluster biological data.
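The two ingredients that define a biased random-key GA, a problem-specific decoder and the biased uniform crossover, can be sketched generically. The decoder below (key value selects the cluster label) is one common choice and is assumed, not necessarily the paper's; the bias parameter rho is likewise illustrative.

```python
import random

def decode(keys, k):
    """Decode a chromosome of random keys in [0,1) into a cluster
    assignment: the key's magnitude selects the cluster label
    (one common decoder choice; the paper's decoder may differ)."""
    return [min(int(key * k), k - 1) for key in keys]

def biased_crossover(elite, non_elite, rho=0.7, rng=random):
    """BRKGA mating: each gene is inherited from the elite parent with
    probability rho, otherwise from the non-elite parent."""
    return [e if rng.random() < rho else n for e, n in zip(elite, non_elite)]

random.seed(0)
n_points, k = 12, 3
elite = [random.random() for _ in range(n_points)]
non_elite = [random.random() for _ in range(n_points)]
child = biased_crossover(elite, non_elite)
labels = decode(child, k)
```

Because chromosomes are always vectors of keys in [0,1), any feasibility handling lives entirely in the decoder, which is what makes the BRKGA framework reusable across clustering criteria.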
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained based on a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach reduces the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of River Beaver and River Weber of the USA. For both rivers, the proposed GA-based hybrid model, in which the parametric and non-parametric components are explored simultaneously, yields a much better prediction of the storage capacity than the MLE-based hybrid models, in which model selection is done in two stages and thus probably results in a sub-optimal model. This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
Baran, Andrea
2016-01-01
Mean-shift is an iterative procedure often used as a nonparametric clustering algorithm that defines clusters based on the modal regions of a density function. The algorithm is conceptually appealing and makes assumptions neither about the shape of the clusters nor about their number. However, with a complexity of O(n²) per iteration, it does not scale well to large data sets. We propose a novel algorithm which performs density-based clustering much more quickly than mean-shift, yet delivers virtually identical results. This algorithm combines subsampling and a stochastic approximation procedure to achieve a potential complexity of O(n) at each step. Its convergence is established. Its performance is evaluated using simulations and applications to image segmentation, where the algorithm was tens or hundreds of times faster than mean-shift while causing a negligible amount of clustering error. The algorithm can be combined with existing approaches to further accelerate clustering. PMID:28479847
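The baseline being accelerated is the classic mean-shift update: move a point to the kernel-weighted mean of the whole data set until it settles at a density mode. This per-point O(n) weighting (O(n²) per sweep over all points) is what the paper's subsampling/stochastic-approximation scheme avoids; bandwidth and data below are illustrative.

```python
import numpy as np

def mean_shift_point(x, data, bandwidth=1.0, iters=50):
    """One mean-shift trajectory: repeatedly move x to the Gaussian-kernel
    weighted mean of all data points. Running this for every point is the
    O(n^2)-per-iteration procedure the paper accelerates."""
    for _ in range(iters):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x = (w[:, None] * data).sum(axis=0) / w.sum()
    return x

rng = np.random.default_rng(0)
# two well-separated Gaussian blobs
data = np.vstack([rng.normal(0, 0.3, (100, 2)),
                  rng.normal(5, 0.3, (100, 2))])
mode = mean_shift_point(np.array([0.5, 0.5]), data, bandwidth=0.5)
```

Points started in the same basin converge to (numerically) the same mode, which is how the mode estimates induce a clustering.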
A random forest algorithm for nowcasting of intense precipitation events
NASA Astrophysics Data System (ADS)
Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh
2017-09-01
Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors, including aviation planning and disaster management. In this paper, a random forest-based machine learning algorithm is tested for nowcasting of convective rain with a ground-based radiometer. Brightness temperatures measured at 14 frequencies (7 in the 22-31 GHz band and 7 in the 51-58 GHz band) are used as inputs to the model. The lower frequency band is associated with water vapor absorption, whereas the upper band relates to oxygen absorption; together they provide information on the temperature and humidity of the atmosphere. The synthetic minority over-sampling technique is used to balance the data set, and 10-fold cross-validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min or 60 min performs quite well (probability of detection of all weather conditions ∼90%) with few false alarms. It is also observed, however, that reducing the alarm generation time improves the threat score significantly and further decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability, as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources, and the approach can be utilized in other forecasting problems.
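The random forest recipe, bootstrap resampling plus randomized trees plus majority voting, can be shown in miniature without any ML library. The "trees" below are single-split stumps and the 14-channel data are synthetic, with an invented label rule; this only illustrates the mechanism, not the paper's radiometer model.

```python
import numpy as np

def fit_stump(x, y, rng, n_thresholds=8):
    """Best single-split decision stump on a bootstrap sample: scan every
    feature with a few random thresholds and keep the most accurate split
    (a deliberately tiny stand-in for a full decision tree)."""
    best, best_acc = None, -1.0
    for feat in range(x.shape[1]):
        for thr in rng.uniform(x[:, feat].min(), x[:, feat].max(), n_thresholds):
            mask = x[:, feat] <= thr
            for left, right in ((0, 1), (1, 0)):
                acc = (np.where(mask, left, right) == y).mean()
                if acc > best_acc:
                    best, best_acc = (feat, thr, left, right), acc
    return best

def forest_predict(stumps, x):
    """Majority vote over the ensemble."""
    votes = np.zeros(len(x))
    for feat, thr, left, right in stumps:
        votes += np.where(x[:, feat] <= thr, left, right)
    return (votes / len(stumps) > 0.5).astype(int)

# synthetic set-up: 14 'brightness temperature' channels; the convective
# label is driven by channel 3 (pure invention for the demo)
rng = np.random.default_rng(0)
n = 400
x = rng.normal(0.0, 1.0, (n, 14))
y = (x[:, 3] > 0.2).astype(int)
stumps = []
for _ in range(50):
    idx = rng.integers(0, n, n)          # bootstrap resample
    stumps.append(fit_stump(x[idx], y[idx], rng))
pred = forest_predict(stumps, x)
accuracy = (pred == y).mean()
```

Counting how often each channel is selected across the ensemble is the variable importance measure the abstract refers to.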
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
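The Pareto-filtering step shared by NSGA-II and MOCBA reduces, for two minimization objectives, to the few lines below. The candidate (average length of stay, waste cost) pairs are invented for illustration.

```python
def non_dominated(points):
    """Return the Pareto (non-dominated) subset for simultaneous
    minimization of two objectives, the core comparison inside
    NSGA-II's non-dominated sorting."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# (average length of stay, resource waste cost) for candidate allocations
candidates = [(4.0, 10.0), (3.5, 12.0), (5.0, 9.0), (4.5, 11.0), (3.8, 10.5)]
front = non_dominated(candidates)
```

In the paper's setting the objective values are themselves noisy simulation outputs, which is why MOCBA must allocate extra replications before dominance can be declared with confidence.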
2D stochastic-integral models for characterizing random grain noise in titanium alloys
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Cherry, Matthew; Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P.
2014-02-18
We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses, and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appear during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles, θ and φ, that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, as the kernel of the K-L integral equation, and find that, of the two, the double-exponential appears to match measurements more closely. Results based on data from a Ti-7Al sample will be given, and further applications of HDMR and ANOVA will be discussed.
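A discrete Karhunen-Loève sample with the double-exponential kernel can be generated by eigendecomposing the covariance matrix on a grid; the sketch below is one-dimensional and uses assumed grid size, correlation length, and truncation, so it illustrates only the expansion, not the paper's 2D Euler-angle model.

```python
import numpy as np

def kl_sample(n_grid=100, corr_len=0.1, n_modes=20, seed=0):
    """Discrete Karhunen-Loeve sample of a random field on [0,1] with
    double-exponential covariance C(s,t) = exp(-|s-t|/l): truncate the
    eigendecomposition and combine modes with i.i.d. standard normals.
    (Grid size, correlation length, and truncation are illustrative.)"""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, n_grid)
    cov = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)
    eigvals, eigvecs = np.linalg.eigh(cov)
    lam = eigvals[-n_modes:]          # eigh returns ascending order;
    phi = eigvecs[:, -n_modes:]       # keep the n_modes largest pairs
    xi = rng.normal(size=n_modes)
    return phi @ (np.sqrt(lam) * xi)

field = kl_sample()
```

In the paper, two such expansions drive the Euler angles θ and φ of each crystallite, and the covariance kernel (Gaussian versus double-exponential) controls how strongly neighbouring crystallites 'clump'.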
NASA Astrophysics Data System (ADS)
Mathelin, Lionel; Desceliers, Christophe; Hussaini, M. Yousuff
2011-06-01
This paper is concerned with the estimation of a parametric probabilistic model of the random displacement source field at the origin of seaquakes in a given region. Observations of the physical effects induced by statistically independent realizations of the seaquake random process are subject to measurement uncertainty, and a stochastic inverse method is proposed to identify each realization of the source field. A statistical reduction is performed to drastically lower the dimension of the space in which the random field is sought, leaving a random vector to identify. An approximation of the vector components is determined using a polynomial chaos decomposition, the solution of an optimality system that identifies an optimal representation. A second-order gradient-based optimization technique is used to efficiently estimate this statistical representation of the unknown source while accounting for the non-linear constraints on the model parameters. This methodology allows the uncertainty associated with the estimates to be quantified and avoids the need for repeatedly solving the forward model.
NASA Astrophysics Data System (ADS)
Sepahvand, K.
2017-07-01
Damping parameters of fiber-reinforced composites possess significant uncertainty due to the structural complexity of such materials. Considering the parameters as random variables, this paper uses the generalized polynomial chaos (gPC) expansion to capture the uncertainty in the damping and the frequency response function of composite plate structures. A spectral stochastic finite element formulation for damped vibration analysis of laminate plates is employed. Experimental modal data for samples of plates are used to identify and realize the range and probability distributions of the uncertain damping parameters. The constructed gPC expansions for the uncertain parameters are used as inputs to a deterministic finite element model to realize random frequency responses at a small number of collocation points generated in random space. The realizations are then employed to estimate the unknown deterministic functions of the gPC expansion approximating the responses. Employing the modal superposition method to solve the harmonic analysis problem yields an efficient sparse gPC expansion representing the responses. The results show that while the responses are influenced by the damping uncertainties at the mid and high frequency ranges, the impact on low frequency modes can be safely ignored. Using a few random collocation points, the method also agrees very well with sampling-based Monte Carlo simulations using a large number of realizations. As the deterministic finite element model serves as a black-box solver, the procedure can be efficiently adopted for complex structural systems with uncertain parameters at reasonable computational cost.
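The non-intrusive collocation idea can be reduced to one random dimension: evaluate a black-box solver at random collocation points and fit Hermite-chaos coefficients by least squares. The solver stand-in, expansion order, and point count below are assumptions; the paper's procedure operates on a full finite element model.

```python
import numpy as np

def hermite_gpc_fit(f, order=4, n_colloc=50, seed=0):
    """Non-intrusive gPC: treat the solver f as a black box, evaluate it
    at random collocation points xi ~ N(0,1), and fit the coefficients
    of a probabilists' Hermite expansion by least squares (illustrative
    one-dimensional version of the collocation procedure)."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=n_colloc)
    # probabilists' Hermite polynomials He_0..He_order via the recurrence
    # He_k(x) = x*He_{k-1}(x) - (k-1)*He_{k-2}(x)
    basis = np.ones((n_colloc, order + 1))
    if order >= 1:
        basis[:, 1] = xi
    for k in range(2, order + 1):
        basis[:, k] = xi * basis[:, k - 1] - (k - 1) * basis[:, k - 2]
    coeffs, *_ = np.linalg.lstsq(basis, f(xi), rcond=None)
    return coeffs

# black-box 'solver': a smooth response of one random damping parameter
coeffs = hermite_gpc_fit(lambda xi: np.exp(0.3 * xi))
mean_estimate = coeffs[0]   # the He_0 coefficient is the response mean
```

Once the coefficients are known, moments and samples of the response come from the expansion alone, with no further solver calls, which is the source of the speed-up over Monte Carlo.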
A stochastic model for adhesion-mediated cell random motility and haptotaxis.
Dickinson, R B; Tranquillo, R T
1993-01-01
The active migration of blood and tissue cells is important in a number of physiological processes including inflammation, wound healing, embryogenesis, and tumor cell metastasis. These cells move by transmitting cytoplasmic force through membrane receptors which are bound specifically to adhesion ligands in the surrounding substratum. Recently, much research has focused on the influence of the composition of extracellular matrix and the distribution of its components on the speed and direction of cell migration. It is commonly believed that the magnitude of the adhesion influences cell speed and/or random turning behavior, whereas a gradient of adhesion may bias the net direction of the cell movement, a phenomenon known as haptotaxis. The mechanisms underlying these responses are presently not understood. A stochastic model is presented to provide a mechanistic understanding of how the magnitude and distribution of adhesion ligands in the substratum influence cell movement. The receptor-mediated cell migration is modeled as an interrelation of random processes on distinct time scales. Adhesion receptors undergo rapid binding and transport, resulting in a stochastic spatial distribution of bound receptors fluctuating about some mean distribution. This results in a fluctuating spatio-temporal pattern of forces on the cell, which in turn affects the speed and turning behavior on a longer time scale. The model equations are a system of nonlinear stochastic differential equations (SDEs) which govern the time evolution of the spatial distribution of bound and free receptors, and the orientation and position of the cell. These SDEs are integrated numerically to simulate the behavior of the model cell on both a uniform substratum, and on a gradient of adhesion ligand concentration. Furthermore, analysis of the governing SDE system and corresponding Fokker-Planck equation (FPE) yields analytical expressions for indices which characterize cell movement on multiple time scales.
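The coupling of fast fluctuations to slower turning behavior can be caricatured with a single-orientation-angle SDE integrated by Euler-Maruyama. This is a hedged Python toy, not the authors' receptor-binding model; the constant speed and the noise amplitude sigma are illustrative assumptions.

```python
import math, random

def simulate_cell_path(speed=1.0, sigma=0.5, dt=0.01, n_steps=10000, seed=1):
    # Euler-Maruyama for a toy persistent random walk: the orientation
    # angle diffuses (fast noise), while the cell moves at constant speed
    # along its current orientation (slow translocation).
    rng = random.Random(seed)
    x = y = 0.0
    theta = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        path.append((x, y))
    return path

path = simulate_cell_path()
# For pure angular diffusion in 2-D the directional persistence time,
# the decay time of <cos(delta theta)>, is 2 / sigma**2.
persistence_time = 2.0 / 0.5 ** 2
```

Paths generated this way show the persistent-random-walk character (straight runs interrupted by gradual reorientation) that the full model reproduces mechanistically.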
NASA Astrophysics Data System (ADS)
Liu, Yurong; Alsaadi, Fuad E.; Yin, Xiaozhou; Wang, Yamin
2015-02-01
In this paper, we are concerned with the robust H∞ filtering problem for a class of nonlinear discrete time-delay stochastic systems. The system under consideration involves parameter uncertainties, stochastic disturbances, time-varying delays and sector nonlinearities. Both missing measurements and randomly occurring nonlinearities are described via the binary switching sequences satisfying a conditional probability distribution, and the nonlinearities are assumed to be sector bounded. The problem addressed is the design of a full-order filter such that, for all admissible uncertainties, nonlinearities and time-delays, the dynamics of the filtering error is constrained to be robustly exponentially stable in the mean square, and a prescribed H∞ disturbance rejection attenuation level is also guaranteed. By using the Lyapunov stability theory and some new techniques, sufficient conditions are first established to ensure the existence of the desired filtering parameters. Then, the explicit expression of the desired filter gains is described in terms of the solution to a linear matrix inequality. Finally, a numerical example is exploited to show the usefulness of the results derived.
Combinatorial approximation algorithms for MAXCUT using random walks.
Seshadhri, Comandur; Kale, Satyen
2010-11-01
We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation-factor/running-time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
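The paper's walk-based procedure is involved; as a point of reference, here is a minimal Python sketch of the trivial randomized baseline it improves on, in which a uniformly random partition cuts each edge with probability 1/2. The 4-cycle test graph is an illustrative assumption.

```python
import random

def random_cut(n_vertices, edges, seed=0):
    # Uniformly random partition: each edge is cut with probability 1/2,
    # so the expected cut value is half the number of edges (the trivial
    # 0.5-approximation factor).
    rng = random.Random(seed)
    side = [rng.random() < 0.5 for _ in range(n_vertices)]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut

# 4-cycle: the maximum cut is 4 (the bipartition); the trivial baseline
# achieves 2 on average.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
avg = sum(random_cut(4, edges, seed=s)[1] for s in range(200)) / 200
```

The contribution of the paper is to beat this expected 0.5 fraction by a constant δ(b) using only combinatorial (random-walk) operations, with no semidefinite programming.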
Random Matrix Approach to Quantum Adiabatic Evolution Algorithms
NASA Technical Reports Server (NTRS)
Boulatov, Alexei; Smelyanskiy, Vadier N.
2004-01-01
We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in matrix space. We use the Brownian motion model to obtain a description of multiple avoided-crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian Unitary Ensemble (GUE) of RMT. We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, this driven RMT model always leads to exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAEA successful.
Albert, Jaroslav
2016-01-01
Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology--the gene switch and the Griffith model of a genetic oscillator--and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them.
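For reference, the GA half of such a hybrid is the standard stochastic simulation algorithm. A minimal, self-contained Python sketch on a birth-death network (the rates and stoichiometry are chosen for illustration, not taken from the paper):

```python
import random

def gillespie(rates, stoich, x0, t_end, seed=0):
    # Minimal Gillespie SSA: rates(x) returns the reaction propensities,
    # stoich[i] is the state-change vector of reaction i.
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while True:
        a = rates(x)
        a0 = sum(a)
        if a0 == 0.0:
            break
        dt = rng.expovariate(a0)          # waiting time to the next reaction
        if t + dt > t_end:
            break
        t += dt
        r = rng.random() * a0             # choose which reaction fired
        i, acc = 0, a[0]
        while acc < r and i < len(a) - 1:
            i += 1
            acc += a[i]
        x = [xi + s for xi, s in zip(x, stoich[i])]
    return x

# Birth-death network: 0 -> A at rate k, A -> 0 at rate g*A.
# The stationary copy number is Poisson with mean k/g = 10.
k, g = 10.0, 1.0
final = [gillespie(lambda x: [k, g * x[0]], [[1], [-1]], [0], 50.0, seed=s)[0]
         for s in range(100)]
mean_a = sum(final) / len(final)
```

In the hybrid scheme described above, the propensities returned by `rates` for the simulated subnetwork would additionally be updated from the CME solution of the remaining species.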
Producing a functional eukaryotic messenger RNA (mRNA) requires the coordinated activity of several large protein complexes to initiate transcription, elongate nascent transcripts, splice together exons, and cleave and polyadenylate the 3’ end. Kinetic competition between these various processes has been proposed to regulate mRNA maturation, but this model could lead to multiple, randomly determined, or stochastic, pathways or outcomes. Regulatory checkpoints have been suggested as a means of ensuring quality control. However, current methods have been unable to tease apart the contributions of these processes at a single gene or on a time scale that could provide mechanistic insight. To begin to investigate the kinetic relationship between transcription and splicing, Daniel Larson, Ph.D., of CCR’s Laboratory of Receptor Biology and Gene Expression, and his colleagues employed a single-molecule RNA imaging approach to monitor production and processing of a human β-globin reporter gene in living cells.
A stochastic control approach to Slotted-ALOHA random access protocol
NASA Astrophysics Data System (ADS)
Pietrabissa, Antonio
2013-12-01
ALOHA random access protocols are distributed protocols based on transmission probabilities, that is, each node decides upon packet transmissions according to a transmission probability value. In the literature, ALOHA protocols are analysed by giving necessary and sufficient conditions for the stability of the queues of the node buffers under a control vector (whose elements are the transmission probabilities assigned to the nodes), given an arrival rate vector (whose elements represent the rates of the packets arriving in the node buffers). The innovation of this work is that, given an arrival rate vector, it computes the optimal control vector by defining and solving a stochastic control problem aimed at maximising the overall transmission efficiency, while keeping a grade of fairness among the nodes. Furthermore, a more general case in which the arrival rate vector changes in time is considered. The increased efficiency of the proposed solution with respect to the standard ALOHA approach is evaluated by means of numerical simulations.
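The dependence of efficiency on the transmission probability can be reproduced with a few lines of simulation. A hedged Python sketch under a saturated-traffic assumption (every node always has a packet), which is simpler than the queued-buffer model analysed in the paper:

```python
import random

def slotted_aloha(p, n_nodes, n_slots, seed=0):
    # Saturated slotted ALOHA: every node always has a packet and transmits
    # with probability p; a slot succeeds iff exactly one node transmits.
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(rng.random() < p for _ in range(n_nodes))
        if transmitters == 1:
            successes += 1
    return successes / n_slots            # empirical throughput

# Throughput n*p*(1-p)**(n-1) is maximised at p = 1/n (about 0.387 for n = 10).
n = 10
t_opt = slotted_aloha(1.0 / n, n, 20000)
t_bad = slotted_aloha(0.5, n, 20000)
```

The gap between `t_opt` and `t_bad` illustrates why computing the control vector of transmission probabilities, rather than fixing it, pays off.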
Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France
2011-09-20
In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in monolix. We simulate crossover trials under H0 using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse design. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse design. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample size and highly variable drug.
Overgaard, Rune V; Jonsson, Niclas; Tornøe, Christoffer W; Madsen, Henrik
2005-02-01
Pharmacokinetic/pharmacodynamic modelling is most often performed using non-linear mixed-effects models based on ordinary differential equations with uncorrelated intra-individual residuals. More sophisticated residual error models, such as stochastic differential equations (SDEs) with measurement noise, can in many cases provide a better description of the variations, which could be useful in various aspects of modelling. This general approach enables a decomposition of the intra-individual residual variation epsilon into system noise w and measurement noise e. The present work describes implementation of SDEs in a non-linear mixed-effects model, where parameter estimation was performed by a novel approximation of the likelihood function. This approximation is constructed by combining the First-Order Conditional Estimation (FOCE) method used in non-linear mixed-effects modelling with the Extended Kalman Filter used in models with SDEs. Fundamental issues concerning the proposed model and estimation algorithm are addressed by simulation studies, concluding that system noise can successfully be separated from measurement noise and inter-individual variability.
Zhang, Naigong; Zeng, Chen
2008-08-01
We adapt a combinatorial optimization algorithm, extremal optimization (EO), for the search problem in computational protein design. This algorithm takes advantage of local energy information and systematically improves on the residues that have high local energies. Power-law probability distributions are used to select the backbone sites to be improved and the rotamer choices to change to. We compare this method with simulated annealing (SA) and motivate and present an improved method, which we call reference energy extremal optimization (REEO). REEO uses reference energies to convert a problem with a structured local-energy profile into one with a more random profile, and extremal optimization proves to be extremely efficient for the latter. We show in detail the large improvement achieved using REEO compared to simulated annealing and discuss a number of other heuristics we have attempted to date.
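The core EO move (rank variables by local fitness, then update a poorly performing one chosen from a power law over ranks) can be sketched in a few lines. This Python toy minimizes mismatches to a target bit string rather than a protein design energy; the fitness definition and the tau value are illustrative assumptions.

```python
import random

def tau_eo(target, tau=1.4, max_steps=5000, seed=0):
    # tau-EO sketch: rank variables worst-first by local fitness and pick
    # the k-th worst with probability proportional to k**(-tau); the chosen
    # variable is updated unconditionally (the hallmark of EO).
    rng = random.Random(seed)
    n = len(target)
    state = [rng.randint(0, 1) for _ in range(n)]
    best = list(state)
    best_cost = sum(s != t for s, t in zip(state, target))
    weights = [(k + 1) ** (-tau) for k in range(n)]   # rank 1 = worst
    for _ in range(max_steps):
        # local fitness: 1 if the variable agrees with the target, else 0
        ranked = sorted(range(n), key=lambda i: state[i] == target[i])
        i = rng.choices(ranked, weights=weights)[0]
        state[i] ^= 1                                  # unconditional flip
        cost = sum(s != t for s, t in zip(state, target))
        if cost < best_cost:
            best, best_cost = list(state), cost
        if best_cost == 0:
            break
    return best, best_cost

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best, cost = tau_eo(target)
```

REEO's reference energies, in this analogy, would reshape the local fitness so that the ranking is informative even when the raw energy profile is structured.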
NASA Astrophysics Data System (ADS)
Jiang, Daqing; Shi, Ningzhong; Li, Xiaoyue
2008-04-01
This paper discusses a randomized non-autonomous logistic equation dN(t) = N(t)[(a(t) - b(t)N(t))dt + α(t)dB(t)], where B(t) is a 1-dimensional standard Brownian motion. In [D.Q. Jiang, N.Z. Shi, A note on non-autonomous logistic equation with random perturbation, J. Math. Anal. Appl. 303 (2005) 164-172], the authors show that E[1/N(t)] has a unique positive T-periodic solution E[1/Np(t)] provided a(t), b(t) and α(t) are continuous T-periodic functions, a(t) > 0, b(t) > 0, and a further condition on the coefficients holds. We show that this equation is stochastically permanent and the solution Np(t) is globally attractive provided a(t), b(t) and α(t) are continuous T-periodic functions, a(t) > 0, b(t) > 0 and min_{t∈[0,T]} a(t) > max_{t∈[0,T]} α²(t). In addition, similar results are obtained for a generalized non-autonomous logistic equation with random perturbation.
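A sample path of a stochastic logistic equation of this type, dN = N[(a - bN)dt + α dB], can be generated with Euler-Maruyama. A minimal Python sketch for the constant-coefficient special case (the parameter values, time step and zero-crossing guard are illustrative assumptions):

```python
import math, random

def logistic_em(a, b, alpha, n0, t_end, dt=1e-3, seed=0):
    # Euler-Maruyama for dN = N[(a - b*N)dt + alpha*dB] with constant
    # coefficients (the time-homogeneous special case of the equation).
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_end:
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        n += n * ((a - b * n) * dt + alpha * dw)
        n = max(n, 1e-12)   # guard: the discretisation must not cross zero
        t += dt
    return n

# With a = b = 1 and small noise, paths hover near the deterministic
# equilibrium a/b = 1, illustrating stochastic permanence.
samples = [logistic_em(1.0, 1.0, 0.1, 0.5, 20.0, seed=s) for s in range(30)]
mean_n = sum(samples) / len(samples)
```

The persistence of paths away from zero under small α is the numerical face of the permanence condition min a(t) > max α²(t) quoted in the abstract.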
A stochastic basis to the spatially uniform distribution of randomly generated Ionian paterae
NASA Astrophysics Data System (ADS)
Shoji, D.; Hussmann, H.
2016-10-01
Due to its tidally heated interior, Io is a geologically very active satellite that bears many volcanic features. It is observed that the mean nearest neighbor distance of each volcanic feature, called a patera, is larger than that of a random distribution, which implies that the spatial distribution of paterae is uniform rather than random. However, it is uncertain how the paterae are organized into a uniform distribution. We propose a mechanism for the uniform distribution of Io's paterae based on localized obliteration of old features. Instead of geological modeling, we performed stochastic simulations and statistical analyses for the obliteration of quiescent paterae. Monte Carlo calculations with Gaussian obliteration probability show that if the width of the obliteration probability is approximately 80 km and the volcanic generation rate is ~5.0 × 10⁻⁶ km⁻² Ma⁻¹, uniform distribution and the observed number density of paterae are attained at the 2σ level on a time scale of approximately 6 Myr. With this generation rate and width of the obliteration probability, the averaged distance of one patera to the nearest patera (mean nearest neighbor distance) is approximately 200 km, which is consistent with the observed value. The uniformity of the distribution is maintained once it is achieved. On regional scales, Io's paterae would naturally evolve from random into uniform distributions by the obliteration of old and quiescent features.
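The generation/obliteration mechanism can be caricatured in a few lines: new features appear at random positions and erase older ones within a fixed obliteration radius, which drives the surviving set from random toward uniform. A hedged Python toy on a unit square (the dimensionless radius and rates are illustrative, not the paper's calibrated values, and the sharp-radius rule stands in for the Gaussian obliteration probability):

```python
import math, random

def simulate_paterae(n_steps=2500, per_step=3, obl_radius=0.08, seed=0):
    # Toy generation/obliteration process on a unit square: each step adds
    # new paterae at random positions, and every new patera obliterates
    # any older one closer than obl_radius.
    rng = random.Random(seed)
    paterae = []
    for _ in range(n_steps):
        for _ in range(per_step):
            p = (rng.random(), rng.random())
            paterae = [q for q in paterae if math.dist(p, q) > obl_radius]
            paterae.append(p)
    return paterae

def mean_nnd(points):
    # Mean nearest-neighbor distance of the surviving features.
    return sum(min(math.dist(p, q) for q in points if q is not p)
               for p in points) / len(points)

pts = simulate_paterae()
# Clark-Evans style ratio: evenly spaced (uniform) patterns give R > 1,
# purely random (Poisson) patterns give R near 1.
expected_random = 0.5 / math.sqrt(len(pts))
ratio = mean_nnd(pts) / expected_random
```

By construction every surviving pair is separated by more than the obliteration radius, so the steady-state pattern is hard-core and its nearest-neighbor ratio exceeds the Poisson expectation, mirroring the paper's observation.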
Stochastic optimal foraging: tuning intensive and extensive dynamics in random searches.
Bartumeus, Frederic; Raposo, Ernesto P; Viswanathan, Gandhimohan M; da Luz, Marcos G E
2014-01-01
Recent theoretical developments have laid down the proper mathematical means to understand how the structural complexity of search patterns may improve foraging efficiency. Under information-deprived scenarios and specific landscape configurations, Lévy walks and flights are known to lead to high search efficiencies. Based on a one-dimensional comparative analysis, we show a mechanism by which a stochastic searcher can optimize encounters with both close and distant targets. The mechanism consists of combining an optimal diffusivity (optimally enhanced diffusion) with a minimal diffusion constant. In such a way the search dynamics adequately balances the tension between finding close and distant targets, while, at the same time, shifting the optimal balance towards relatively larger close-to-distant target encounter ratios. We find that introducing a multiscale set of reorientations ensures both a thorough local space exploration without oversampling and a fast spreading dynamics at the large scale. Lévy reorientation patterns account for these properties, but other reorientation strategies providing similar statistical signatures can mimic or achieve comparable efficiencies. Hence, the present work unveils general mechanisms underlying efficient random search, beyond the Lévy model. Our results suggest that animals could tune key statistical movement properties (e.g. enhanced diffusivity, minimal diffusion constant) to cope with the very general problem of balancing out intensive and extensive random searching. We believe that theoretical developments to mechanistically understand stochastic search strategies, such as the one here proposed, are crucial to develop an empirically verifiable and comprehensive animal foraging theory.
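A minimal Python sketch of a Lévy-type search path, with power-law step lengths drawn by inverse-transform sampling (the exponent, minimum step and two-dimensional setting are illustrative assumptions; the paper's analysis is one-dimensional and more refined):

```python
import math, random

def levy_walk(mu=2.0, l_min=1.0, n_steps=1000, seed=0):
    # 2-D Levy-type walk: step lengths follow a power law p(l) ~ l**(-mu)
    # for l >= l_min (Pareto, via inverse-transform sampling); headings
    # are uniform on [0, 2*pi).
    rng = random.Random(seed)
    x = y = 0.0
    walk = [(0.0, 0.0)]
    for _ in range(n_steps):
        u = 1.0 - rng.random()                     # u in (0, 1]
        step = l_min * u ** (-1.0 / (mu - 1.0))    # Pareto step length
        angle = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        walk.append((x, y))
    return walk

# mu near 2 yields the heavy-tailed relocations often reported as optimal
# in sparse-target searches; mu -> 3 approaches Brownian-like diffusion.
walk = levy_walk()
```

Mixing many short steps (intensive, local exploration) with occasional long relocations (extensive spreading) is exactly the intensive/extensive balance the abstract describes.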
Stochastic Seismic Response of an Algiers Site with Random Depth to Bedrock
Badaoui, M.; Mebarki, A.; Berrah, M. K.
2010-05-21
Among the important effects of the Boumerdes earthquake (Algeria, May 21st, 2003) was that, within the same zone, the destruction in certain parts was more severe than in others. This phenomenon is due to site effects, which alter the characteristics of seismic motions and cause concentration of damage during earthquakes. Local site effects such as the thickness and mechanical properties of soil layers have important effects on the surface ground motions. This paper deals with the effect of the randomness of the depth to bedrock (soil layer heights), which is assumed to be a random variable with lognormal distribution. This distribution is suitable for strictly non-negative random variables with large values of the coefficient of variation. In this case, Monte Carlo simulations are combined with the stiffness matrix method, used herein as a deterministic method, for evaluating the effect of the depth-to-bedrock uncertainty on the seismic response of a multilayered soil. This study considers a P and SV wave propagation pattern using input accelerations recorded at the Keddara station, located 20 km from the epicenter and directly on the bedrock. A parametric study is conducted to derive the stochastic behavior of the peak ground acceleration and its response spectrum, the transfer function and the amplification factors. It is found that the soil height heterogeneity causes a widening of the frequency content and an increase in the fundamental frequency of the soil profile, indicating that the resonance phenomenon concerns a larger number of structures.
1988-01-01
Two central features of polymorphonuclear leukocyte chemosensory movement behavior demand fundamental theoretical understanding. In uniform concentrations of chemoattractant, these cells exhibit a persistent random walk, with a characteristic "persistence time" between significant changes in direction. In chemoattractant concentration gradients, they demonstrate a biased random walk, with an "orientation bias" characterizing the fraction of cells moving up the gradient. A coherent picture of cell movement responses to chemoattractant requires that both the persistence time and the orientation bias be explained within a unifying framework. In this paper, we offer the possibility that "noise" in the cellular signal perception/response mechanism can simultaneously account for these two key phenomena. In particular, we develop a stochastic mathematical model for cell locomotion based on kinetic fluctuations in chemoattractant/receptor binding. This model can simulate cell paths similar to those observed experimentally, under conditions of uniform chemoattractant concentrations as well as chemoattractant concentration gradients. Furthermore, this model can quantitatively predict both cell persistence time and dependence of orientation bias on gradient size. Thus, the concept of signal "noise" can quantitatively unify the major characteristics of leukocyte random motility and chemotaxis. The same level of noise large enough to account for the observed frequency of turning in uniform environments is simultaneously small enough to allow for the observed degree of directional bias in gradients. PMID:3339093
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment, E[R^m], is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
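A minimal Python sketch of the autoregressive building block: simulate an AR(1) series and extract its local extrema, the quantities a rainflow-type stress-range analysis consumes. The AR(1) order and parameters are illustrative; the thesis fits the autoregressive model to a target psd.

```python
import random

def simulate_ar1(phi, sigma, n, seed=0):
    # AR(1) process x_t = phi * x_{t-1} + e_t, a minimal member of the
    # ARMA family reviewed in the abstract (not the author's extrema model).
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma))
    return x

def extrema(series):
    # Local extrema (peaks and troughs): points where consecutive
    # differences change sign.
    out = []
    for prev, cur, nxt in zip(series, series[1:], series[2:]):
        if (cur - prev) * (nxt - cur) < 0:
            out.append(cur)
    return out

x = simulate_ar1(0.7, 1.0, 20000)
ext = extrema(x)
# Stationary variance of AR(1) is sigma**2 / (1 - phi**2), about 1.96 here.
var_x = sum(v * v for v in x) / len(x)
```

The technique described in the abstract goes one step further: it synthesizes the extrema sequence directly, skipping the full-resolution series altogether.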
Randomized Algorithms for Systems and Control: Theory and Applications
2008-05-01
Elizabeth A. Freeman; Gretchen G. Moisen; John W. Coulston; Barry T. (Ty) Wilson
2015-01-01
As part of the development of the 2011 National Land Cover Database (NLCD) tree canopy cover layer, a pilot project was launched to test the use of high-resolution photography coupled with extensive ancillary data to map the distribution of tree canopy cover over four study regions in the conterminous US. Two stochastic modeling techniques, random forests (RF...
E. Freeman; G. Moisen; J. Coulston; B. Wilson
2014-01-01
Random forests (RF) and stochastic gradient boosting (SGB), both involving an ensemble of classification and regression trees, are compared for modeling tree canopy cover for the 2011 National Land Cover Database (NLCD). The objectives of this study were twofold. First, sensitivity of RF and SGB to choices in tuning parameters was explored. Second, performance of the...
Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa
2016-01-01
This paper is concerned with the distributed and centralized fusion filtering problems in sensor networked systems with random one-step delays in transmissions. The delays are described by Bernoulli variables correlated at consecutive sampling times, with different characteristics at each sensor. The measured outputs are subject to uncertainties modeled by random parameter matrices, thus providing a unified framework to describe a wide variety of network-induced phenomena; moreover, the additive noises are assumed to be one-step autocorrelated and cross-correlated. Under these conditions, without requiring the knowledge of the signal evolution model, but using only the first and second order moments of the processes involved in the observation model, recursive algorithms for the optimal linear distributed and centralized filters under the least-squares criterion are derived by an innovation approach. Firstly, local estimators based on the measurements received from each sensor are obtained and, after that, the distributed fusion filter is generated as the least-squares matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. In order to compare the estimators' performance, recursive formulas for the error covariance matrices are derived in all the algorithms. The effects of the delays on the filters' accuracy are analyzed in a numerical example which also illustrates how some usual network-induced uncertainties can be dealt with using the current observation model described by random matrices. PMID:27338387
NASA Astrophysics Data System (ADS)
Tsai, C.; Hung, R. J.
2015-12-01
This study attempts to apply queueing theory to develop a stochastic framework that can account for the random-sized batch arrivals of incoming sediment particles into receiving waters. Sediment particles, the control volume and the mechanics of sediment transport (such as the mechanics of suspension, deposition and resuspension) are treated as the customers, the service facility and the server, respectively, in queueing theory. In the framework, the stochastic diffusion particle tracking model (SD-PTM) and resuspension of particles are included to simulate the random transport trajectories of suspended particles. The most distinctive characteristic of queueing theory is that customers arrive at the service facility in a random manner. In analogy to sediment transport, this characteristic is adopted to model the random-sized batch arrival process of sediment particles, including the random occurrences and random magnitudes of incoming sediment particles. The random occurrences of arrivals are simulated by a Poisson process, while the number of sediment particles in each arrival can be simulated by a binomial distribution. Simulations of random arrivals alone and of random magnitudes alone are carried out for comparison with the random-sized batch arrival simulations. Simulation results give a probabilistic description of discrete sediment transport through ensemble statistics (i.e. ensemble means and ensemble variances) of sediment concentrations and transport rates. Results reveal that different mechanisms of incoming particles result in differences in the ensemble variances of concentrations and transport rates under the same mean incoming rate of sediment particles.
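The random-sized batch arrival process itself (Poisson occurrence times, binomial batch sizes) can be sketched directly. A hedged Python toy with illustrative parameter values:

```python
import random

def batch_arrivals(rate, batch_n, batch_p, t_end, seed=0):
    # Random-sized batch arrivals: occurrence times follow a Poisson
    # process (exponential inter-arrival gaps), and each batch size is
    # Binomial(batch_n, batch_p).
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            break
        size = sum(rng.random() < batch_p for _ in range(batch_n))
        arrivals.append((t, size))
    return arrivals

# Mean particle influx = rate * E[batch size] = rate * batch_n * batch_p = 6.
arr = batch_arrivals(rate=2.0, batch_n=10, batch_p=0.3, t_end=5000.0)
total = sum(size for _, size in arr)
mean_rate = total / 5000.0
```

Fixing the mean influx while changing the split between occurrence rate and batch size reproduces the abstract's point: different arrival mechanisms yield different ensemble variances at the same mean rate.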
Ma, Li; Fan, Suohai
2017-03-14
The random forests algorithm is a classifier with strong universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper addresses imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm is proposed for feature selection and parameter optimization, using the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (the hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests algorithms) can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, these hybrid algorithms provide a new way to perform feature selection and parameter optimization.
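The SMOTE half of CURE-SMOTE rests on a simple idea: synthesize minority samples by interpolating between a minority point and one of its nearest minority neighbors. A minimal Python sketch of plain SMOTE, without the CURE clustering step (the toy 2-D points and the value of k are assumptions):

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    # SMOTE-style oversampling: each synthetic sample is a random linear
    # interpolation between a minority point and one of its k nearest
    # minority neighbors.
    rng = random.Random(seed)
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbors = sorted((q for q in minority if q is not p),
                           key=lambda q: dist2(p, q))[:k]
        q = rng.choice(neighbors)
        lam = rng.random()
        synthetic.append(tuple(u + lam * (v - u) for u, v in zip(p, q)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
new_pts = smote_like(minority, n_new=20)
```

CURE-SMOTE's contribution is to run the interpolation on CURE cluster representatives rather than raw points, which keeps noisy or outlying minority samples from spawning synthetic noise.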
A multispin algorithm for the Kob-Andersen stochastic dynamics on regular lattices
NASA Astrophysics Data System (ADS)
Boccagna, Roberto
2017-07-01
The aim of this paper is to propose an algorithm based on the multispin coding technique for the Kob-Andersen glassy dynamics. We first give motivations to speed up the numerical simulation in the context of spin glass models [M. Mezard, G. Parisi, M. Virasoro, Spin Glass Theory and Beyond (World Scientific, Singapore, 1987)]; after defining the Markovian dynamics as in [W. Kob, H.C. Andersen, Phys. Rev. E 48, 4364 (1993)] as well as the related observables of interest, we extend it to the more general framework of random regular graphs, listing at the same time some known analytical results [C. Toninelli, G. Biroli, D.S. Fisher, J. Stat. Phys. 120, 167 (2005)]. The purpose of this work is twofold. First, we describe how bitwise operators can be used to build up the algorithm by carefully exploiting the way data are stored on a computer. Since it was first introduced [M. Creutz, L. Jacobs, C. Rebbi, Phys. Rev. D 20, 1915 (1979); C. Rebbi, R.H. Swendsen, Phys. Rev. D 21, 4094 (1980)], this technique has been widely used to perform Monte Carlo simulations of Ising and Potts spin systems; however, it can be successfully adapted to more complex systems in which microscopic parameters may assume boolean values. Second, we introduce a random graph in which a characteristic parameter allows tuning of the possible transition point. A substantial part of the paper is devoted to the numerical results obtained by running the simulations.
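The bitwise trick itself is easy to demonstrate: pack one spin per bit so that a single XOR updates 64 independent replicas at once. A hedged Python sketch of the storage layout and flip step only (the Kob-Andersen kinetic constraint is not implemented here):

```python
import random

def multispin_flip_step(words, flip_prob, n_bits=64, seed=0):
    # Multispin coding: bit j of words[site] holds the spin of replica j
    # at that site, so one bitwise XOR flips the chosen spins in all
    # 64 replicas simultaneously.
    rng = random.Random(seed)
    mask_all = (1 << n_bits) - 1
    out = []
    for w in words:
        mask = 0
        for j in range(n_bits):          # random flip mask, bit j set
            if rng.random() < flip_prob:  # with probability flip_prob
                mask |= 1 << j
        out.append((w ^ mask) & mask_all)
    return out

def replica_spin(words, site, replica):
    return (words[site] >> replica) & 1

# 8 lattice sites, all spins initially 0 in all 64 replicas.
words = [0] * 8
words = multispin_flip_step(words, flip_prob=0.5)
density = sum(bin(w).count("1") for w in words) / (8 * 64)
```

In a real Kob-Andersen implementation the flip mask would additionally be AND-ed with a constraint mask encoding, per replica, whether the site has few enough occupied neighbors to be mobile.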
NASA Astrophysics Data System (ADS)
Schneider, Simon; Mueller, Marco; Janke, Wolfhard
2017-07-01
We investigate the behavior of the deviation of the estimator for the density of states (DOS) with respect to the exact solution in the course of Wang-Landau and Stochastic Approximation Monte Carlo (SAMC) simulations of the two-dimensional Ising model. We find that the deviation saturates in the Wang-Landau case. This can be cured by adjusting the refinement scheme; to this end, the 1/t modification of the Wang-Landau algorithm has been suggested. A similar choice of refinement scheme is employed in the SAMC algorithm. The convergence behavior of all three algorithms is examined. It turns out that the convergence of the SAMC algorithm is very sensitive to the onset of the refinement. Finally, the internal energy and specific heat of the Ising model are calculated from the SAMC DOS and compared to exact values.
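The Wang-Landau refinement loop discussed above can be illustrated on a toy model with a known density of states: n independent binary spins with energy equal to the number of up spins, whose exact DOS is the binomial coefficient C(n, E). A minimal Python sketch with the standard f -> f/2 refinement (the flatness threshold and sweep length are illustrative assumptions, and the Ising model of the paper is replaced by this trivially solvable system):

```python
import math, random

def wang_landau(n=10, f_final=1e-6, flat=0.8, seed=0):
    # Wang-Landau on a toy model: n binary spins, energy E = number of up
    # spins, exact density of states g(E) = C(n, E).
    rng = random.Random(seed)
    state = [0] * n
    E = 0
    log_g = [0.0] * (n + 1)
    hist = [0] * (n + 1)
    log_f = 1.0
    while log_f > f_final:
        for _ in range(10000):
            i = rng.randrange(n)
            E_new = E + (1 - 2 * state[i])       # a flip changes E by +-1
            # accept with probability min(1, g(E) / g(E_new))
            if math.log(rng.random() + 1e-300) < log_g[E] - log_g[E_new]:
                state[i] ^= 1
                E = E_new
            log_g[E] += log_f                    # update the visited level
            hist[E] += 1
        if min(hist) > flat * sum(hist) / len(hist):   # histogram flat?
            hist = [0] * (n + 1)
            log_f /= 2.0                         # refinement f -> f/2
    return log_g

log_g = wang_landau()
g = [math.exp(v - log_g[0]) for v in log_g]      # normalise so g(0) = 1
```

The saturation effect studied in the paper shows up here as a residual error in g that stops shrinking once log_f becomes very small; the 1/t schedule replaces the halving step to cure it.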
Stochastic generation of explicit pore structures by thresholding Gaussian random fields
Hyman, Jeffrey D.; Winter, C. Larrabee
2014-11-15
We provide a description and computational investigation of an efficient method to stochastically generate realistic pore structures. Smolarkiewicz and Winter introduced this specific method in pores resolving simulation of Darcy flows (Smolarkiewicz and Winter, 2010 [1]) without giving a complete formal description or analysis of the method, or indicating how to control the parameterization of the ensemble. We address both issues in this paper. The method consists of two steps. First, a realization of a correlated Gaussian field, or topography, is produced by convolving a prescribed kernel with an initial field of independent, identically distributed random variables. The intrinsic length scales of the kernel determine the correlation structure of the topography. Next, a sample pore space is generated by applying a level threshold to the Gaussian field realization: points are assigned to the void phase or the solid phase depending on whether the topography over them is above or below the threshold. Hence, the topology and geometry of the pore space depend on the form of the kernel and the level threshold. Manipulating these two user prescribed quantities allows good control of pore space observables, in particular the Minkowski functionals. Extensions of the method to generate media with multiple pore structures and preferential flow directions are also discussed. To demonstrate its usefulness, the method is used to generate a pore space with physical and hydrological properties similar to a sample of Berea sandstone.
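The two-step construction can be sketched in a few lines (a minimal sketch with an assumed Gaussian kernel, grid size, and periodic boundaries, not the authors' implementation):

```python
import math, random

random.seed(42)
n, half = 64, 3                      # grid size, kernel half-width
field = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

# Step 1: smooth an i.i.d. field; the kernel length scale sets the
# correlation length of the resulting "topography"
kern = [[math.exp(-(i * i + j * j) / 4.0) for j in range(-half, half + 1)]
        for i in range(-half, half + 1)]

def smooth(f):
    out = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            s = 0.0
            for i in range(-half, half + 1):
                for j in range(-half, half + 1):
                    s += kern[i + half][j + half] * f[(x + i) % n][(y + j) % n]
            out[x][y] = s
    return out

topo = smooth(field)

# Step 2: threshold at a quantile; the level controls the porosity
gamma = 0.3
vals = sorted(v for row in topo for v in row)
cut = vals[int(gamma * n * n)]
pore = [[1 if topo[x][y] < cut else 0 for y in range(n)] for x in range(n)]
porosity = sum(map(sum, pore)) / (n * n)
```

Thresholding at the gamma-quantile pins the porosity to gamma by construction, while the kernel shape governs pore geometry and connectivity.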
Selecting Random Distributed Elements for HIFU using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yufeng
2011-09-01
As an effective and noninvasive therapeutic modality for tumor treatment, high-intensity focused ultrasound (HIFU) has attracted attention from both physicians and patients. New generations of HIFU systems with the ability to electrically steer the HIFU focus using phased-array transducers have been under development. The presence of side and grating lobes may cause undesired thermal accumulation at the interface of the coupling medium (i.e., water) and skin, or in the intervening tissue. Although sparse randomly distributed piston elements can reduce the amplitude of grating lobes, there are theoretically no grating lobes with the use of concave elements in the new phased-array HIFU. A new HIFU transmission strategy is proposed in this study: firing only a subset of the elements for a certain period and then switching to another group for the next firing sequence. The advantages are: 1) the asymmetric placement of the active elements may reduce the side lobes, and 2) each element has some resting time during the entire HIFU ablation (up to several hours for some clinical applications), so that the loss of transducer efficiency due to thermal accumulation is minimized. A genetic algorithm was used to select the randomly distributed elements in the HIFU array, with the amplitudes of the first side lobes at the focal plane used as the fitness value in the optimization. Overall, it is suggested that the proposed strategy could reduce the side lobes and their consequent side effects, and that the genetic algorithm is effective in selecting randomly distributed elements in a HIFU array.
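A minimal genetic algorithm over binary on/off element patterns can illustrate the selection idea (our toy surrogate fitness, which penalizes adjacent active elements, stands in for the acoustic side-lobe computation, which is not reproduced here):

```python
import random

random.seed(8)
n, m, pop_size, gens = 20, 10, 30, 60   # elements, active count, GA sizes

def fitness(bits):
    """Toy surrogate: adjacent active elements mimic aperture periodicity,
    which raises grating/side lobes; lower is better."""
    return sum(bits[i] and bits[i + 1] for i in range(n - 1))

def random_member():
    idx = random.sample(range(n), m)
    return [1 if i in idx else 0 for i in range(n)]

def repair(bits):
    """Keep exactly m active elements after crossover/mutation."""
    on = [i for i, b in enumerate(bits) if b]
    off = [i for i, b in enumerate(bits) if not b]
    while len(on) > m:
        bits[on.pop(random.randrange(len(on)))] = 0
    while len(on) < m:
        i = off.pop(random.randrange(len(off)))
        bits[i] = 1
        on.append(i)
    return bits

pop = [random_member() for _ in range(pop_size)]
for _ in range(gens):
    pop.sort(key=fitness)
    survivors = pop[:pop_size // 2]          # truncation selection (elitist)
    children = []
    while len(survivors) + len(children) < pop_size:
        p1, p2 = random.sample(survivors, 2)
        cut = random.randrange(1, n)
        child = p1[:cut] + p2[cut:]          # one-point crossover
        if random.random() < 0.2:
            child[random.randrange(n)] ^= 1  # single-bit mutation
        children.append(repair(child))
    pop = survivors + children

best = min(pop, key=fitness)
```

With a real array, `fitness` would instead evaluate the first side-lobe amplitude at the focal plane for the candidate element pattern.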
NASA Astrophysics Data System (ADS)
Jin, Shi; Lu, Hanqing
2017-04-01
In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem, the random inputs arise from uncertainties in the cross section, initial data, or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, combined with the micro-macro decomposition based deterministic AP framework, in order to handle the diffusive regime efficiently. For the linearized problem, we prove the regularity of the solution in the random space and, consequently, the spectral accuracy of the gPC-SG method. We also prove uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
Elfering, Achim; Schade, Volker; Stoecklin, Lukas; Baur, Simone; Burger, Christian; Radlinger, Lorenz
2014-05-01
Slip, trip, and fall injuries are frequent among health care workers. Stochastic resonance whole-body vibration training was tested to improve postural control. Participants included 124 employees of a Swiss university hospital. The randomized controlled trial included an experimental group given 8 weeks of training and a control group with no intervention. In both groups, postural control was assessed as mediolateral sway on a force plate before and after the 8-week trial. Mediolateral sway was significantly decreased by stochastic resonance whole-body vibration training in the experimental group but not in the control group that received no training (p < .05). Stochastic resonance whole-body vibration training is an option in the primary prevention of balance-related injury at work.
Braumann, Andreas; Kraft, Markus; Wagner, Wolfgang
2010-10-01
This paper is concerned with computational aspects of a multidimensional population balance model of a wet granulation process. Wet granulation is a manufacturing method to form composite particles, granules, from small particles and binders. A detailed numerical study of a stochastic particle algorithm for the solution of a five-dimensional population balance model for wet granulation is presented. Each particle consists of two types of solids (containing pores) and of external and internal liquid (located in the pores). Several transformations of particles are considered, including coalescence, compaction and breakage. A convergence study is performed with respect to the parameter that determines the number of numerical particles. Averaged properties of the system are computed. In addition, the ensemble is subdivided into practically relevant size classes and analysed with respect to the amount of mass and the particle porosity in each class. These results illustrate the importance of the multidimensional approach. Finally, the kinetic equation corresponding to the stochastic model is discussed.
Berco, Dan; Tseng, Tseung-Yuen
2015-12-21
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single-layer ZrO₂ device with a double-layer ZnO/ZrO₂ one, and obtains results which are in good agreement with experimental data.
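The Metropolis acceptance step at the core of such a method looks as follows (a generic sketch under assumed barrier heights; the paper's Gibbs-free-energy landscape and device geometry are not reproduced):

```python
import math, random

def metropolis_moves(barrier, kT=0.025, steps=20000, seed=7):
    """Count accepted Metropolis moves against a uniform free-energy
    barrier (eV); fewer accepted moves = a cruder proxy for slower
    degradation, i.e. better retention."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(steps):
        dG = barrier * rng.random()          # proposed free-energy increase
        if rng.random() < math.exp(-dG / kT):  # Metropolis criterion
            accepted += 1
    return accepted

# A higher barrier (hypothetically, a more robust insulator stack)
# should accept fewer moves in this crude retention proxy.
weak = metropolis_moves(0.1)
strong = metropolis_moves(0.3)
```

Note that no physical time appears anywhere: as in the abstract, only relative move-acceptance statistics of two structures are compared.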
Conditional random pattern algorithm for LOH inference and segmentation.
Wu, Ling-Yun; Zhou, Xiaobo; Li, Fuhai; Yang, Xiaorong; Chang, Chung-Che; Wong, Stephen T C
2009-01-01
Loss of heterozygosity (LOH) is one of the most important mechanisms in tumor evolution. LOH can be detected from the genotypes of tumor samples with or without paired normal samples. In the paired-sample case, LOH detection for informative single nucleotide polymorphisms (SNPs) is straightforward if there is no genotyping error. However, genotyping errors are unavoidable, and about 70% of SNPs are non-informative, so their LOH status can only be inferred from neighboring informative SNPs. This article presents a novel LOH inference and segmentation algorithm based on the conditional random pattern (CRP) model. The new model explicitly considers the distance between two neighboring SNPs, as well as the genotyping error rate and the heterozygosity rate. The new method is tested on simulated and real data from the Affymetrix Human Mapping 500K SNP arrays. The experimental results show that the CRP method outperforms conventional methods based on the hidden Markov model (HMM). Software is available upon request.
NASA Astrophysics Data System (ADS)
Wei, Lin-Yang; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming
2016-11-01
Inverse estimation of the refractive index distribution in one-dimensional participating media with a graded refractive index (GRI) is investigated. The forward radiative transfer problem is solved by the Chebyshev collocation spectral method. The stochastic particle swarm optimization (SPSO) algorithm is employed to retrieve three kinds of GRI distribution, i.e. linear, sinusoidal and quadratic. The retrieval accuracy of the GRI distribution for different wall emissivities, optical thicknesses, absorption coefficients and scattering coefficients is discussed thoroughly. To improve the retrieval accuracy for the quadratic GRI distribution, a double-layer model is proposed to supply more measurement information. The influence of measurement errors upon the precision of the estimated results is also investigated. Since the GRI distribution is unknown beforehand in practice, a quadratic function is employed to retrieve the linear GRI by the SPSO algorithm. All the results show that the SPSO algorithm can accurately retrieve different GRI distributions in participating media, even with noisy data.
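A minimal particle swarm optimizer conveys the retrieval idea (illustrative only: the radiative transfer forward model is replaced by a linear profile n(x) = a + b·x fitted to synthetic "measurements", and the SPSO variant is simplified to standard PSO coefficients):

```python
import random

random.seed(3)
true_a, true_b = 1.2, 0.5
xs = [i / 10 for i in range(11)]
meas = [true_a + true_b * x for x in xs]       # synthetic measurements

def cost(p):
    """Misfit between the candidate profile and the measurements."""
    a, b = p
    return sum((a + b * x - m) ** 2 for x, m in zip(xs, meas))

P, D, iters = 20, 2, 200                       # particles, dims, iterations
pos = [[random.uniform(0, 3) for _ in range(D)] for _ in range(P)]
vel = [[0.0] * D for _ in range(P)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)

for _ in range(iters):
    for k in range(P):
        for d in range(D):
            r1, r2 = random.random(), random.random()
            vel[k][d] = (0.7 * vel[k][d]                     # inertia
                         + 1.5 * r1 * (pbest[k][d] - pos[k][d])  # cognitive
                         + 1.5 * r2 * (gbest[d] - pos[k][d]))    # social
            pos[k][d] += vel[k][d]
        if cost(pos[k]) < cost(pbest[k]):
            pbest[k] = pos[k][:]
    gbest = min(pbest, key=cost)

a_hat, b_hat = gbest
```

In the paper's setting, `cost` would compare measured radiative intensities against those predicted by the spectral forward solver for the candidate GRI parameters.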
Deng, Haishan; Xiang, Bingren; Xie, Shaofei; Zhou, Xiaohua
2007-01-01
This paper describes an attempt to determine trace levels of two benzimidazole fungicides (carbendazim, CAS 10605-21-7, and thiabendazole, CAS 148-79-8) in drinking water samples using the newly proposed linear modulated stochastic resonance algorithm. To implement an adaptive and intelligent algorithm, a two-step optimization procedure was developed for parameter selection, paying attention to both the signal-to-noise ratio and the peak shape of the output signal. How to limit the ranges of the parameters to be searched is discussed in detail. The limits of detection for carbendazim and thiabendazole were improved to 0.012 microg x L(-1) and 0.015 microg x L(-1), respectively. The successful application demonstrates the ability of the algorithm to detect two or more weak chromatographic peaks simultaneously.
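The stochastic resonance principle can be shown generically (a simple threshold detector on a synthetic subthreshold peak, not the paper's linear modulated variant): moderate added noise lets a peak that never reaches the detection threshold on its own produce crossings near its apex.

```python
import math, random

N, thr = 2000, 1.0
# synthetic subthreshold chromatographic-like peak, amplitude 0.9 < thr
signal = [0.9 * math.exp(-((i - N // 2) / 60.0) ** 2) for i in range(N)]

def detections(noise_sd, seed=0):
    """Count samples that cross the threshold after adding Gaussian noise."""
    rng = random.Random(seed)
    return sum((s + rng.gauss(0, noise_sd)) > thr for s in signal)

quiet = detections(0.01)     # essentially no crossings without noise
helped = detections(0.2)     # moderate noise lifts the peak over threshold
```

Too much noise would, of course, also trigger crossings far from the peak; the two-step parameter optimization in the paper is precisely about finding the noise/modulation setting that maximizes output quality.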
Random matrix approach to quantum adiabatic evolution algorithms
Boulatov, A.; Smelyanskiy, V.N.
2005-05-15
We analyze the power of the quantum adiabatic evolution algorithm (QAA) for solving random computationally hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that nonadiabatic corrections in the QAA are due to the interaction of the ground state with the 'cloud' formed by most of the excited states, confirming that in driven RMT models, the Landau-Zener scenario of pairwise level repulsions is not relevant for the description of nonadiabatic corrections. We show that the QAA has a finite probability of success in a certain range of parameters, implying a polynomial complexity of the algorithm. The second model corresponds to the standard QAA with the problem Hamiltonian taken from the RMT Gaussian unitary ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. For this reason, the driven GUE model can also lead to polynomial complexity of the QAA. The main contribution to the failure probability of the QAA comes from the nonadiabatic corrections to the eigenstates, which only depend on the absolute values of the transition amplitudes. Due to the mapping between the two models, these absolute values are the same in both cases. Our results indicate that this 'phase irrelevance' is the leading effect that can make both the Markovian- and GUE-type QAAs successful.
Luo, Chao; Wang, Xingyuan
2013-01-01
A novel algebraic approach is proposed to study the dynamics of asynchronous random Boolean networks, in which a random number of nodes can be updated at each time step (ARBNs). In this article, the logical equations of ARBNs are converted into a discrete-time linear representation and the dynamical behaviors of the systems are investigated. We provide a general formula for the network transition matrices of ARBNs, as well as a necessary and sufficient algebraic criterion to determine whether a group of given states composes an attractor of a given length in ARBNs. Consequently, algorithms are obtained to find all of the attractors and basins in ARBNs. Examples are shown to demonstrate the feasibility of the proposed scheme.
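For intuition, the attractors of a tiny Boolean network can be found by brute-force enumeration of the state space (synchronous updating, our simplification; the paper treats asynchronous updates through an algebraic state-transition-matrix representation):

```python
def step(state):
    """Toy 3-node Boolean network with fixed (synchronous) update rules."""
    x1, x2, x3 = state
    return (x2 and x3, not x1, x1 or x2)

def attractors(n=3):
    """Follow every initial state until it revisits itself; the states
    from the first revisit onward form an attractor cycle."""
    found = set()
    for s0 in range(2 ** n):
        state = tuple(bool((s0 >> i) & 1) for i in range(n))
        seen = set()
        while state not in seen:
            seen.add(state)
            state = step(state)
        cycle, cur = [], state
        while True:
            cycle.append(cur)
            cur = step(cur)
            if cur == state:
                break
        found.add(tuple(sorted(cycle)))   # canonical form deduplicates
    return found

atts = attractors()   # this toy network has a single length-5 cycle
```

The paper's algebraic criterion replaces this exhaustive walk with conditions on powers of the network transition matrix, which also handles the nondeterminism of asynchronous updates.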
Inner Random Restart Genetic Algorithm for Practical Delivery Schedule Optimization
NASA Astrophysics Data System (ADS)
Sakurai, Yoshitaka; Takada, Kouhei; Onoyama, Takashi; Tsukamoto, Natsuki; Tsuruta, Setsuo
A delivery route optimization that improves the efficiency of real-time delivery or a distribution network requires solving Traveling Salesman Problems (TSPs) of several tens to hundreds, but fewer than two thousand, cities within an interactive response time (about 3 seconds) and with expert-level accuracy (about 3% error rate). To make things more difficult, the optimization is subject to the special requirements or preferences of various delivery sites, persons, or societies. To meet these requirements, an Inner Random Restart Genetic Algorithm (Irr-GA) is proposed and developed. This method combines meta-heuristics, such as random restart, with a GA employing different types of simple heuristics, namely the 2-opt and NI (Nearest Insertion) methods, each applied as a gene operation. The proposed method is hierarchically structured, integrating meta-heuristics and heuristics that are multiple but simple. The method is designed so that field experts as well as field engineers can easily understand it, making the solution easy to customize and extend according to customers' needs or tastes. Comparison based on the experimental results and analysis showed that the method meets the above requirements better than other methods, judging not only from optimality but also from the simplicity, flexibility, and expandability needed for practical use.
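The 2-opt heuristic used inside the gene operations can be sketched as follows (generic 2-opt descent on random points; the paper's encoding and NI operator are not reproduced):

```python
import math, random

random.seed(5)
pts = [(random.random(), random.random()) for _ in range(30)]

def length(tour):
    """Total length of a closed tour over pts."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour):
    """Repeatedly reverse the segment that shortens the tour most simply,
    until no improving reversal exists (a 2-opt local optimum)."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if length(cand) < length(best) - 1e-12:
                    best, improved = cand, True
    return best

tour0 = list(range(len(pts)))
tour1 = two_opt(tour0)
```

In an Irr-GA-style scheme, such a local improvement would be applied to offspring tours, with random restarts guarding against poor local optima.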
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges
2012-05-01
The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. Its main objectives are to minimize the traveled distance, the total traveling time, the number of vehicles and the cost of transportation. Reducing these quantities lowers the total cost and increases the drivers' satisfaction level. On the other hand, this satisfaction, which decreases as service time increases, is an important logistic concern for a company. Service times governed by a random variable vary stochastically, an effect that is ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour, such that the total traveling time of the vehicles is limited to a specified bound with a given probability. Since exact solution of the vehicle routing problem, which belongs to the category of NP-hard problems, is impractical at large scale, a hybrid algorithm based on simulated annealing with genetic operators was proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
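A bare-bones simulated annealing loop over a single tour illustrates the core of such a hybrid (our sketch omits the genetic operators and the stochastic service times):

```python
import math, random

random.seed(11)
pts = [(random.random(), random.random()) for _ in range(25)]  # customers

def length(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(pts)))
cur = length(tour)
T = 1.0                                  # initial temperature
for _ in range(20000):
    i, j = sorted(random.sample(range(len(tour)), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
    d = length(cand) - cur
    # accept improvements always, worsenings with Boltzmann probability
    if d < 0 or random.random() < math.exp(-d / T):
        tour, cur = cand, cur + d
    T *= 0.9995                          # geometric cooling schedule
```

The hybrid in the paper would additionally recombine tours with genetic operators and evaluate the probabilistic travel-time constraint inside the objective.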
Hermes, Matthew R.; Hirata, So
2014-12-28
A stochastic algorithm based on Metropolis Monte Carlo (MC) is presented for the size-extensive vibrational self-consistent field methods (XVSCF(n) and XVSCF[n]) for anharmonic molecular vibrations. The new MC-XVSCF methods substitute stochastic evaluations of a small number of high-dimensional integrals of functions of the potential energy surface (PES), which is sampled on demand, for diagrammatic equations involving high-order anharmonic force constants. This algorithm obviates the need to evaluate and store any high-dimensional partial derivatives of the potential and can be applied to the fully anharmonic PES without any Taylor-series approximation in an intrinsically parallelizable algorithm. The MC-XVSCF methods reproduce deterministic XVSCF calculations on the same Taylor-series PES in all energies, frequencies, and geometries. Calculations using the fully anharmonic PES evaluated on the fly with electronic structure methods report anharmonic effects on frequencies and geometries of much greater magnitude than deterministic XVSCF calculations, reflecting an underestimation of anharmonic effects in a Taylor-series approximation to the PES.
The Copenhagen Triage Algorithm: a randomized controlled trial.
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia; Ravn, Lisbet; Lind, Morten; Greibe, Rasmus; Jensen, Birgitte Nybo; Rasmussen, Lars S; Iversen, Kasper
2016-10-10
Crowding in the emergency department (ED) is a well-known problem resulting in an increased risk of adverse outcomes. Effective triage might counteract this problem by identifying the sickest patients and ensuring early treatment. In the last two decades, systematic triage has become the standard in EDs worldwide. However, triage models are also time consuming, supported by limited evidence, and could potentially do more harm than good. The aim of this study is to develop a quicker triage model using data from a large cohort of unselected ED patients and to evaluate whether this new model is non-inferior to an existing triage model in a prospective randomized trial. The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years (n = 50,000) admitted to the EDs of two large acute hospitals. Centers are randomly assigned to perform either CTA or DEPT triage first and then use the other triage model in the second period. The CTA stratifies patients into five acuity levels in two steps. First, a scoring chart based on vital signs is used to assign a preliminary category. Second, a clinical assessment by the ED nurse can alter the result suggested by the score by up to two categories up or one down. The primary end-point is 30-day mortality; secondary end-points are length of stay, time to treatment, admission to the intensive care unit, and readmission within 30 days. If proven non-inferior to standard DEPT triage, CTA will be a faster and simpler triage model that is still able to detect the critically ill. Simplifying triage will lessen the burden on ED staff and possibly allow faster treatment. Clinicaltrials.gov: NCT02698319, registered 24 February 2016, retrospectively registered.
Li, Tong; Gu, YuanTong
2014-04-15
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithms used in highly coarse-grained molecular dynamics underestimate the thermodynamic behavior of soft matter (e.g. microfilaments in cells), which can weaken the ability of the material to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed that retains the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. The new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA
Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D
2013-01-01
Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
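The multilevel idea, estimate the coarse level well and correct it with cheap coupled differences, can be sketched on a toy expectation (our construction, not the paper's SPDE setting; the level-l "discretization bias" decays like 2^-l by design):

```python
import random

random.seed(9)

def sample(level, u):
    """Toy level-l estimator of a quantity of interest with bias ~ 2**-level;
    u is the shared underlying random input (the 'coupling')."""
    return u + 2.0 ** -level * (u - 0.5)

def mlmc(levels, n0=40000):
    """Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Corrections shrink with level, so far fewer samples are needed there."""
    total = 0.0
    for l in range(levels + 1):
        n = max(n0 // 4 ** l, 100)           # geometric sample-size decay
        acc = 0.0
        for _ in range(n):
            u = random.random()
            if l == 0:
                acc += sample(0, u)
            else:
                acc += sample(l, u) - sample(l - 1, u)  # coupled difference
        total += acc / n
    return total

est = mlmc(4)   # true expectation of the toy quantity is 0.5
```

Because the coupled differences have small variance, the fine levels contribute little statistical error despite their small sample counts, which is the cost saving the multilevel analysis quantifies.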
Hong, Dawei; Man, Shushuang; Martin, Joseph V
2016-01-21
There are two functionally important factors in signal propagation in a brain structural network: the very first synaptic delay, a time delay of about 1 ms, from the moment when signals originate to the moment when observation of the signal propagation can begin; and rapid random fluctuations in the membrane potentials of every individual neuron in the network on a timescale of microseconds. We provide a stochastic analysis of signal propagation in a general setting. The analysis shows that the two factors together result in a stochastic mechanism for the signal propagation, as described below. A brain structural network is not a rigid circuit but rather a very flexible framework that guides signals to propagate without guaranteeing success of the propagation. In such a framework, given the very first synaptic delay, rapid random fluctuations in every individual neuron in the network cause an "alter-and-concentrate effect" that almost surely forces signals to propagate successfully. Through this stochastic mechanism, we provide analytic evidence for a force behind signal propagation in a brain structural network caused by rapid random fluctuations in every individual neuron on a timescale of microseconds with a time delay of 1 ms.
A stochastically fully connected conditional random field framework for super resolution OCT
NASA Astrophysics Data System (ADS)
Boroomand, A.; Tan, B.; Wong, A.; Bizheva, K.
2017-02-01
A number of factors can degrade the resolution and contrast of OCT images, such as: (1) changes of the OCT point-spread function (PSF) resulting from wavelength-dependent scattering and absorption of light along the imaging depth, (2) speckle noise, and (3) motion artifacts. We propose a new Super Resolution OCT (SR OCT) imaging framework that takes advantage of a Stochastically Fully Connected Conditional Random Field (SF-CRF) model to generate a super-resolved OCT image of higher quality from a set of Low-Resolution OCT (LR OCT) images. The proposed SF-CRF SR OCT imaging is able to compensate simultaneously for all of the image-degrading factors mentioned above within a unified computational framework. The framework was tested on a set of simulated LR human retinal OCT images generated from a high-resolution, high-contrast retinal image, and on a set of in-vivo, high-resolution, high-contrast rat retinal OCT images. The reconstructed SR OCT images show considerably higher spatial resolution, less speckle noise and higher contrast than other tested methods. Visual assessment of the results demonstrated the usefulness of the proposed approach in better preserving fine details and structures of the imaged sample, retaining biological tissue boundaries while reducing speckle noise. Quantitative evaluation using both the Contrast-to-Noise Ratio (CNR) and the Edge Preservation (EP) parameters also showed superior performance of the proposed SF-CRF SR OCT approach compared to other image processing approaches.
ERIC Educational Resources Information Center
Argoti, A.; Fan, L. T.; Cruz, J.; Chou, S. T.
2008-01-01
The stochastic simulation of chemical reactions, specifically, a simple reversible chemical reaction obeying the first-order, i.e., linear, rate law, has been presented by Martinez-Urreaga and his collaborators in this journal. The current contribution is intended to complement and augment their work in two aspects. First, the simple reversible…
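A standard Gillespie-type stochastic simulation of the reversible first-order reaction A ⇌ B runs as follows (the rate constants, population size, and horizon are our choices for illustration):

```python
import math, random

random.seed(12)
k_f, k_b = 1.0, 0.5          # forward and backward rate constants
nA, nB, t = 100, 0, 0.0      # molecule counts and simulated time

while t < 20.0:
    a_f, a_b = k_f * nA, k_b * nB          # reaction propensities
    a0 = a_f + a_b
    # exponential waiting time to the next reaction event
    t += -math.log(random.random() or 1e-12) / a0
    # choose which reaction fires, proportionally to its propensity
    if random.random() * a0 < a_f:
        nA, nB = nA - 1, nB + 1            # A -> B
    else:
        nA, nB = nA + 1, nB - 1            # B -> A

# At equilibrium, detailed balance gives nA/nB -> k_b/k_f,
# so nA fluctuates around 100/3 here.
```

For this linear reaction the stochastic mean matches the deterministic rate-law solution, which is why it makes a clean pedagogical example.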
Using random forest algorithm to predict β-hairpin motifs.
Jia, Shao-Chun; Hu, Xiu-Zhen
2011-06-01
A novel method is presented for predicting β-hairpin motifs in protein sequences: the Random Forest algorithm applied to multiple characteristic parameters, which include position-specific amino acid composition, position-specific hydropathy composition, predicted secondary structure information and auto-correlation function values. Firstly, the method is trained and tested on a set of 8,291 β-hairpin motifs and 6,865 non-β-hairpin motifs. The overall accuracy and Matthews correlation coefficient reach 82.2% and 0.64 using 5-fold cross-validation, and 81.7% and 0.63 on the independent test. Secondly, the method is also tested on a set of 4,884 β-hairpin motifs and 4,310 non-β-hairpin motifs used in previous studies. The overall accuracy and Matthews correlation coefficient reach 80.9% and 0.61 for 5-fold cross-validation, and 80.6% and 0.60 for the independent test, which improves on the previous results. Thirdly, with the 4,884 β-hairpin motifs and 4,310 non-β-hairpin motifs as the training set and the 8,291 β-hairpin motifs and 6,865 non-β-hairpin motifs as the independent test set, the overall accuracy and Matthews correlation coefficient reach 81.5% and 0.63 on the independent test.
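A stripped-down random forest of decision stumps shows the bootstrap-and-vote scheme (synthetic two-feature data replace the position-specific sequence features used in the paper):

```python
import random

random.seed(2)
# synthetic two-feature data: class 1 when x0 + x1 > 1
X = [[random.random(), random.random()] for _ in range(200)]
y = [int(a + b > 1.0) for a, b in X]

def train_stump(idx):
    """Best single-feature threshold split on a bootstrap sample (by indices)."""
    best, best_acc = None, -1.0
    for f in range(2):
        for thr in [t / 10 for t in range(1, 10)]:
            left = [y[i] for i in idx if X[i][f] <= thr]
            right = [y[i] for i in idx if X[i][f] > thr]
            if not left or not right:
                continue
            lmaj = int(sum(left) * 2 >= len(left))     # majority label, left
            rmaj = int(sum(right) * 2 >= len(right))   # majority label, right
            acc = (sum(v == lmaj for v in left) +
                   sum(v == rmaj for v in right)) / len(idx)
            if acc > best_acc:
                best, best_acc = (f, thr, lmaj, rmaj), acc
    return best

def predict(trees, x):
    """Majority vote over all stumps."""
    votes = sum(r if x[f] > thr else l for f, thr, l, r in trees)
    return int(votes * 2 >= len(trees))

# forest: each stump is trained on its own bootstrap resample
trees = [train_stump([random.randrange(200) for _ in range(200)])
         for _ in range(25)]
acc = sum(predict(trees, x) == t for x, t in zip(X, y)) / 200
```

A real random forest additionally grows full trees with random feature subsets at each split; stumps keep the sketch short while preserving the bagging-plus-voting structure.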
NASA Astrophysics Data System (ADS)
Pulkkinen, A.; Klimas, A.; Vassiliadis, D.; Uritsky, V.
2005-12-01
Understanding the evolution of bursts of activity in the magnetosphere-ionosphere system has been one of the central challenges in space physics since, and even prior to, the introduction of the term "substorm". An extensive amount of work has been put into characterizing the average behavior of the near-space plasma environment during substorms, and several more or less deterministic models have been introduced to explain the observations. However, although most substorms seem to share some common characteristics (otherwise any classification would be meaningless), such as intensification of auroral electric currents, dipolarization of the magnetotail and injections of plasma sheet charged particles, each substorm has its own distinct features in terms of strong fluctuations around the average "typical" behavior. This highly complex nature of individual substorms suggests that stochastic processes may play a role, even a central one, in the evolution of substorms. In this work, we develop a simple stochastic model for the AE-index variations to investigate the role of random fluctuations in the substorm phenomenon. We show that by introducing a stochastic component we are able to capture some fundamental features of the AE-index variations. More specifically, complex variations associated with individual bursts are a central part of the model. We demonstrate that by analyzing the structure of the constructed stochastic model, some presently open questions about substorm-related bursts of the AE-index can be addressed quantitatively. First and foremost, we show that the stochastic fluctuations are a fundamental part of the AE-index evolution and cannot be neglected even when the average properties of the index are of interest.
Biased Randomized Algorithm for Fast Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Williams, Colin; Vartan, Farrokh
2005-01-01
A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional-satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding articles "Fast Algorithms for Model-Based Diagnosis" and "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appear elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis - the task of finding the faulty components - reduces to finding the components, the abnormalities of which could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) but slow, and yields a lower bound on the size of the
Haron, Zaiton; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri
2015-01-01
Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. Also, it assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high accuracy of prediction results together with a majority of absolute differences of less than 2 dBA; also, the predicted noise doses were mostly in the range of measurement. Therefore, the random walk approach was effective in dealing with environmental noises. It could predict strategic noise mapping to facilitate noise monitoring and noise control in the workplaces.
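The random-walk spreading idea can be sketched crudely (our construction; RW-eNMS itself models machinery schedules and dBA levels, which are omitted here): walkers leave a source cell, and visit counts approximate relative exposure across the floor grid.

```python
import random

random.seed(4)
n = 21
grid = [[0] * n for _ in range(n)]   # visit counts = crude exposure map
src = (10, 10)                       # noise source at the grid center

for _ in range(2000):                # independent walkers
    x, y = src
    for _ in range(30):              # steps per walker
        grid[x][y] += 1              # deposit "energy" at the current cell
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), n - 1)   # reflect at the walls
        y = min(max(y + dy, 0), n - 1)

center, corner = grid[10][10], grid[0][0]
```

A full simulation would weight deposits by source emission level and convert accumulated counts to dBA doses; the sketch only shows why exposure decays with distance from the source.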
NASA Astrophysics Data System (ADS)
Samouylov, K.; Sopin, E.; Vikhrova, O.; Shorgin, S.
2017-07-01
We suggest a convolution algorithm for calculating the normalization constant for the stationary probabilities of a multiserver queuing system with random resource requirements. Our algorithm significantly reduces the computing time of the stationary probabilities and of system characteristics such as blocking probabilities and the average number of occupied resources. The algorithm is designed to avoid the calculation of k-fold convolutions and to use memory resources economically.
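The pairwise-convolution idea can be illustrated on a toy single-resource loss system. The model below (Poisson demand per class, a fixed resource requirement `b` per customer, total capacity `R`) is a simplified stand-in for the authors' multiserver system, not their exact formulation.

```python
import math


def convolve(p, q, R):
    """Convolution of two occupancy distributions, truncated at capacity R."""
    out = [0.0] * (R + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= R:
                out[i + j] += pi * qj
    return out


def normalization_constant(classes, R):
    """classes: list of (offered_load_a, resource_units_b) per customer class.

    Builds the unnormalized distribution of occupied resource units by
    convolving per-class distributions one at a time, instead of forming
    a single k-fold convolution; returns (distribution, G).
    """
    dist = [1.0] + [0.0] * R  # empty system
    for a, b in classes:
        # unnormalized occupancy of one class: a^n / n! at n*b resource units
        g = [0.0] * (R + 1)
        n = 0
        while n * b <= R:
            g[n * b] = a ** n / math.factorial(n)
            n += 1
        dist = convolve(dist, g, R)
    return dist, sum(dist)
```

For this toy model the class blocking probability is the normalized tail mass `sum(dist[R - b + 1:]) / G`; in the single-class, unit-requirement case this reproduces the Erlang B formula.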
On Relaxation Algorithms Based on Markov Random Fields.
1987-07-10
C-0008. This contract supported the Northeast Artificial Intelligence Consortium (NAIC). This work was also supported in part by the U.S. Army Engineering... "continuation" (Fig. 4c) configurations simultaneously would be of little use, as the increases would tend to cancel each other out. The sensitivity of the... as are those obtained by applying 3x3 Kirsch operators with non-maximum suppression. The annealing schedule for the stochastic MAP follows the one
NASA Astrophysics Data System (ADS)
Sepahvand, K.; Marburg, S.
2015-03-01
A non-sampling probability identification method based on the generalized polynomial chaos (gPC) expansion is adopted for estimating random parameters of composite plates from experimental eigenfrequencies. For that, the parameters and the eigenfrequencies are approximated using gPC expansions. Distribution functions of the eigenfrequencies are identified from experimental data employing Bayesian inference. This identification is then used to construct a vector of random variables and an orthogonal basis for the eigenfrequency expansions. The parameters are characterized by the gPC having unknown deterministic coefficients and the same random basis as the eigenfrequencies. A stochastic finite element simulation of the plates serves as the model from which the parameter coefficients are estimated via an inverse problem. The major advantage of the method is its use of a deterministic identification procedure. An application is presented in which samples of orthotropic laminated plates are tested to identify the E-moduli, shear modulus, and major Poisson's ratio from measured modal frequencies.
NASA Astrophysics Data System (ADS)
He, Guitian; Guo, Dali; Tian, Yan; Li, Tiejun; Luo, Maokang
2017-10-01
The generalized stochastic resonance (GSR) and the bona fide stochastic resonance (SR) in a generalized Langevin equation driven by a periodic signal, multiplicative noise and Mittag-Leffler noise are extensively investigated. The expression of the frequency spectrum of the Mittag-Leffler noise is studied. Using the Shapiro-Loginov formula and the Laplace transformation technique, the exact expressions of the output amplitude gain and the signal-to-noise ratio are obtained. The simulation results show that the output amplitude gain and the signal-to-noise ratio are non-monotonic functions of the noise and system parameters. In particular, the memory exponent and memory time of the Mittag-Leffler noise can induce the GSR phenomenon, and the driving frequency can induce bona fide stochastic resonance. It is found that SR is more easily induced in a system with a fractional memory exponent than in one with an integer memory exponent.
Diffusion and stochastic island generation in the magnetic field line random walk
Vlad, M.; Spineanu, F.
2014-08-10
The cross-field diffusion of field lines in stochastic magnetic fields described by the 2D+slab model is studied using a semi-analytic statistical approach, the decorrelation trajectory method. We show that field line trapping and the associated stochastic magnetic islands strongly influence the diffusion coefficients, leading to dependences on the parameters that are different from the quasilinear and Bohm regimes. A strong amplification of the diffusion is produced by a small slab field in the presence of trapping. The diffusion regimes are determined and the corresponding physical processes are identified.
Koh, Wonryull; Blackwell, Kim T.
2011-01-01
Stochastic simulation of reaction–diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies. PMID:21513371
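The basic Poisson tau-leaping step that the paper extends can be sketched for a single birth-death species. This sketch omits the paper's concentration-gradient diffusion coupling and uses a simple clamp, rather than the non-negative leap condition, to keep the population non-negative.

```python
import math
import random


def poisson(lam):
    """Knuth's method for Poisson sampling; adequate for small lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1


def tau_leap(x0, rate_prod, rate_decay, t_end, tau):
    """Basic Poisson tau-leaping for 0 -> A (rate_prod) and A -> 0 (rate_decay * x).

    Propensities are frozen during each leap; the leap condition assumes they
    do not change appreciably over tau. The max(0, ...) clamp is a crude
    substitute for the non-negative leaping of the paper.
    """
    x, t = x0, 0.0
    while t < t_end:
        n_prod = poisson(rate_prod * tau)           # firings of production
        n_decay = poisson(rate_decay * x * tau)      # firings of decay
        x = max(0, x + n_prod - n_decay)
        t += tau
    return x
```

For this linear system the stationary distribution is Poisson with mean `rate_prod / rate_decay`, which gives a quick sanity check on the leaping approximation.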
NASA Technical Reports Server (NTRS)
Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.
1997-01-01
Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large, and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on-the-fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need to generate rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
Sidje, R B; Vo, H D
2015-11-01
The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included.
Stochastic nonlinear wave equation with memory driven by compensated Poisson random measures
Liang, Fei; Gao, Hongjun
2014-03-15
In this paper, we study a class of stochastic nonlinear wave equation with memory driven by Lévy noise. We first show the existence and uniqueness of global mild solutions using a suitable energy function. Second, under some additional assumptions we prove the exponential stability of the solutions.
NASA Astrophysics Data System (ADS)
Leimkuhler, Ben; Margul, Daniel T.; Tuckerman, Mark E.
2013-12-01
Molecular dynamics is one of the most commonly used approaches for studying the dynamics and statistical distributions of physical, chemical, and biological systems using atomistic or coarse-grained models. It is often the case, however, that the interparticle forces drive motion on many time scales, and the efficiency of a calculation is limited by the choice of time step, which must be sufficiently small that the fastest force components are accurately integrated. Multiple time-stepping algorithms partially alleviate this inefficiency by assigning to each time scale an appropriately chosen step-size. As the fast forces are often computationally cheaper to evaluate than the slow forces, this results in a significant gain in efficiency. However, such approaches are limited by resonance phenomena, wherein motion on the fastest time scales limits the step sizes associated with slower time scales. In atomistic models of biomolecular systems, for example, resonances limit the largest time step to around 5-6 fs. Stochastic processes promote mixing and ergodicity in dynamical systems and reduce the impact of resonant modes. In this paper, we introduce a set of stochastic isokinetic equations of motion that are shown to be rigorously ergodic, largely free of resonances, and can be integrated using a multiple time-stepping algorithm which is easily implemented in existing molecular dynamics codes. The technique is applied to a simple, illustrative problem and then to a more realistic system, namely, a flexible water model. Using this approach outer time steps as large as 100 fs are shown to be possible.
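A plain reversible multiple-time-stepping (r-RESPA) step, the scheme whose resonance limits motivate the paper, might look as follows. The stochastic isokinetic thermostat itself is not reproduced here; this is only the kick-drift-kick splitting with the slow force applied at the outer step.

```python
def respa_step(x, v, f_fast, f_slow, dt, n_inner, mass=1.0):
    """One reversible RESPA step for a single degree of freedom.

    The slow (expensive) force is evaluated once per outer step of size dt,
    while the fast (cheap, stiff) force is integrated with n_inner
    velocity-Verlet substeps of size dt / n_inner.
    """
    h = dt / n_inner
    v += 0.5 * dt * f_slow(x) / mass          # slow half-kick
    for _ in range(n_inner):
        v += 0.5 * h * f_fast(x) / mass       # fast half-kick
        x += h * v                            # drift
        v += 0.5 * h * f_fast(x) / mass       # fast half-kick
    v += 0.5 * dt * f_slow(x) / mass          # slow half-kick
    return x, v
```

With only a stiff harmonic fast force, the splitting reduces to velocity Verlet at the inner step size and conserves energy to second order, which is an easy correctness check.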
Dulikravich, George S.; Sikka, Vinod K.; Muralidharan, G.
2006-06-01
The goal of this project was to adapt and use an advanced semi-stochastic algorithm for constrained multiobjective optimization and to combine it with experimental testing and verification to determine the optimum concentrations of alloying elements in heat-resistant and corrosion-resistant H-series stainless steel alloys that simultaneously maximize a number of the alloys' mechanical and corrosion properties.
A novel quantum random number generation algorithm used by smartphone camera
NASA Astrophysics Data System (ADS)
Wu, Nan; Wang, Kun; Hu, Haixing; Song, Fangmin; Li, Xiangdong
2015-05-01
We study an efficient algorithm to extract quantum random numbers (QRN) from the raw data obtained by charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) based sensors, such as the camera in a commercial smartphone. Based on the NIST statistical test suite for random number generators, the proposed algorithm has a high QRN generation rate and high statistical randomness. The algorithm enables simple, low-cost, and reliable devices to serve as QRN generators for quantum key distribution (QKD) and other cryptographic applications.
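A minimal extraction pipeline of this kind (take the noise-dominated least significant bit of each raw pixel, then debias) can be sketched as follows. The von Neumann extractor is a generic choice here, not necessarily the paper's post-processing.

```python
def lsb_bits(raw_bytes):
    """Least significant bit of each pixel value, where shot noise dominates."""
    return [b & 1 for b in raw_bytes]


def von_neumann_extract(bits):
    """Von Neumann debiasing: map bit pairs 01 -> 0, 10 -> 1, drop 00 and 11.

    Removes bias from independent, identically biased bits at the cost of
    throwing away at least half of the raw bit stream.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```

The debiased stream would then be subjected to the NIST statistical tests before being used as key material.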
Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications
2016-10-17
finite element building blocks, including the assembly of the element load vectors and element stiffness matrices. Hierarchic bases have been prevalent... most efficient hierarchic bases currently in use. An algorithm for the construction of the element matrices in optimal complexity for uniform order... constructs the non-uniform order element matrices in optimal complexity. In the final part of this work, we extend the algorithm of [2] to the non-uniform
Wolesensky, William; Logan, J David
2007-01-01
We model the effects of both stochastic and deterministic temperature variations on arthropod predator-prey systems. Specifically, we study the stochastic dynamics of arthropod predator-prey interactions under a varying temperature regime, and we develop an individual model of a prey under pressure from a predator, with vigilance (or foraging effort), search rates, attack rates, and other predation parameters dependent on daily temperature variations. Simulations suggest that an increase in the daily average temperature may benefit both predator and prey. Furthermore, simulations show that anti-predator behavior may indeed decrease predation, but at the expense of reduced prey survivorship because of a greater increase in other types of mortality.
Can stochastic, dissipative wave fields be treated as random walk generators
NASA Technical Reports Server (NTRS)
Weinstock, J.
1986-01-01
A suggestion by Meek et al. (1985) that the gravity wave field be viewed as stochastic, with significant nonlinearities, is applied to calculate diffusivities. The purpose here is to calculate the diffusivity for a stochastic wave model and compare it with previous diffusivity estimates. We do this for an idealized case in which the wind velocity changes only slowly and in which saturation is the principal mechanism by which wave energy is lost. A related calculation was given very briefly before (Weinstock, 1976), but the approximations were not fully justified, nor were the physical presuppositions clearly explained. The observations of Meek et al. (1985) have clarified those presuppositions and provided a rationale for, and improvement of, the approximations employed.
Exact Mapping of the Stochastic Field Theory for Manna Sandpiles to Interfaces in Random Media
NASA Astrophysics Data System (ADS)
Le Doussal, Pierre; Wiese, Kay Jörg
2015-03-01
We show that the stochastic field theory for directed percolation in the presence of an additional conservation law [the conserved directed-percolation (C-DP) class] can be mapped exactly to the continuum theory for the depinning of an elastic interface in short-range correlated quenched disorder. Along one line of the parameters commonly studied, this mapping leads to the simplest overdamped dynamics. Away from this line, an additional memory term arises in the interface dynamics; we argue that this does not change the universality class. Since C-DP is believed to describe the Manna class of self-organized criticality, this shows that Manna stochastic sandpiles and disordered elastic interfaces (i.e., the quenched Edwards-Wilkinson model) share the same universal large-scale behavior.
An improved label propagation algorithm based on the similarity matrix using random walk
NASA Astrophysics Data System (ADS)
Zhang, Xian-Kun; Song, Chen; Jia, Jia; Lu, Zeng-Lei; Zhang, Qian
2016-05-01
Community detection based on the label propagation algorithm (LPA) has attracted widespread attention because of its high efficiency. However, it is difficult to guarantee the accuracy of community detection because label spreading in the algorithm is random. In response to this problem, an improved LPA based on random walk (RWLPA) is proposed in this paper. First, a matrix measuring the similarity among the nodes in the network is computed. Second, during label propagation, when a node has more than one neighbor label with the highest frequency, the label of the neighbor with the highest similarity, rather than the label of a random neighbor, is chosen for the update. This avoids labels propagating randomly among communities. Finally, we test LPA and the improved LPA on benchmark networks and real-world networks. The results show that the quality of the communities discovered by the improved algorithm is better than that of the traditional algorithm.
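The similarity tie-break that distinguishes RWLPA from plain LPA can be sketched as follows. The similarity matrix is taken as an input here; in the paper it is derived from random-walk transition probabilities.

```python
def label_propagation(adj, similarity, max_iter=100):
    """LPA where ties among most-frequent neighbor labels are broken by
    node similarity instead of at random (the core idea of RWLPA).

    adj: dict node -> list of neighbor nodes.
    similarity: dict node -> dict node -> similarity score (caller-supplied).
    """
    labels = {v: v for v in adj}  # each node starts in its own community
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):  # deterministic update order for this sketch
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            if not counts:
                continue  # isolated node keeps its label
            best = max(counts.values())
            tied = [l for l, c in counts.items() if c == best]
            if len(tied) == 1:
                new = tied[0]
            else:
                # tie-break: label carried by the most similar neighbor
                new = max(tied, key=lambda l: max(
                    similarity[v][u] for u in adj[v] if labels[u] == l))
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:
            break
    return labels
```

On two disjoint triangles this converges to exactly two communities, with every tie resolved by the similarity scores rather than a coin flip.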
NASA Astrophysics Data System (ADS)
Atanassov, E.; Dimitrov, D.; Gurov, T.
2015-10-01
The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
Probabilistic Analysis of Random Extension-Rotation Algorithms
1981-10-01
notions, such as "maximal with a given probability." Properties of random independence systems, such as the existence of an independent set of given... of M. Then (V, E0) has the same probability in the random graph Gn,p as in M and... in the set of all simple paths in... Formulation as a proper RIS... if the RIS is bigger than a certain value, then rotation succeeds with probability one in finding short augmentation sequences in a random instance of the RIS
Stochastic models and numerical algorithms for a class of regulatory gene networks.
Fournier, Thomas; Gabriel, Jean-Pierre; Pasquier, Jerôme; Mazza, Christian; Galbete, José; Mermod, Nicolas
2009-08-01
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
A Simple Window Random Access Algorithm with Advantageous Properties.
1987-07-01
the proposed algorithm is adopted... user model is valid, the Part-and-Try algorithm with... [remainder of excerpt: garbled mathematical notation from the window random-access analysis]
NASA Astrophysics Data System (ADS)
Dokou, Zoi; Pinder, George F.
2011-02-01
The design of an effective groundwater remediation system involves the determination of the source zone characteristics and subsequent source zone removal. The work presented in this paper focuses on the three-dimensional extension and field application of a previously described source zone identification and delineation algorithm. The three-dimensional search algorithm defines how to achieve an acceptable level of accuracy regarding the strength, geographic location and depth of a dense non-aqueous phase liquid (DNAPL) source while using the least possible number of water quality samples. Target locations and depths of potential sources are identified and given initial importance measures or weights using a technique that exploits expert knowledge. The weights reflect the expert's confidence that the particular source location is the correct one and they are updated as the investigation proceeds. The overall strategy uses stochastic groundwater flow and transport modeling assuming that hydraulic conductivity is known with uncertainty (Monte Carlo approach). Optimal water quality samples are selected according to the degree to which they contribute to the total concentration uncertainty reduction across all model layers and the proximity of the samples to the potential source locations. After a sample is taken, the contaminant concentration plume is updated using a Kalman filter. The set of optimal source strengths is determined using linear programming by minimizing the sum of the absolute differences between modeled and measured concentration values at sampling locations. The Monte Carlo generated suite of plumes emanating from each individual source is calculated and compared with the updated plume. The scores obtained from this comparison serve to update the weights initially assigned by the expert, and the above steps are repeated until the optimal source characteristics are determined. The algorithm's effectiveness is demonstrated by performing a
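The measurement-update step at the heart of the plume-updating procedure is a Kalman filter. The scalar version below shows the mechanics; the paper applies the multivariate form to the whole concentration field after each water-quality sample.

```python
def kalman_update(mean, var, obs, obs_var):
    """Scalar Kalman measurement update.

    Fuses a model-predicted quantity (mean, var) with a new noisy
    observation (obs, obs_var); returns the posterior (mean, var).
    """
    k = var / (var + obs_var)            # Kalman gain in [0, 1]
    new_mean = mean + k * (obs - mean)   # pull estimate toward the observation
    new_var = (1.0 - k) * var            # posterior uncertainty always shrinks
    return new_mean, new_var
```

When the model and observation variances are equal, the update simply averages the two values and halves the variance, which is the intuition behind selecting samples that maximally reduce concentration uncertainty.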
Community Detection Algorithm Combining Stochastic Block Model and Attribute Data Clustering
NASA Astrophysics Data System (ADS)
Kataoka, Shun; Kobayashi, Takuto; Yasuda, Muneki; Tanaka, Kazuyuki
2016-11-01
We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, the information assigned to each vertex associated with the community to which it belongs. The problem addressed in this paper is the detection of the community structure from the information in both the network structure and the vertex attribute data. Our approach is based on a Bayesian formulation that models the posterior probability distribution of the community labels. The detection of the community structure in our method is achieved by using belief propagation and an EM algorithm. We numerically verified the performance of our method using computer-generated networks and real-world networks.
Quantization of Random Walks: Search Algorithms and Hitting Time
NASA Astrophysics Data System (ADS)
Santha, Miklos
Many classical search problems can be cast in the following abstract framework: Given a finite set X and a subset M ⊆ X of marked elements, detect if M is empty or not, or find an element in M if there is any. When M is not empty, a naive approach to the finding problem is to repeatedly pick a uniformly random element of X until a marked element is sampled. A more sophisticated approach might use a Markov chain, that is a random walk on the state space X in order to generate the samples. In that case the resources spent for previous steps are often reused to generate the next sample. Random walks also model spatial search in physical regions where the possible moves are expressed by the edges of some specific graph. The hitting time of a Markov chain is the number of steps necessary to reach a marked element, starting from the stationary distribution of the chain.
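The hitting time defined above is straightforward to estimate by direct simulation. The sketch below runs a random walk on a chain given as explicit transition lists; it illustrates the classical quantity that the quantum walk algorithms speed up.

```python
def hitting_time(transition, marked, start, rng, max_steps=10_000):
    """Number of random-walk steps until a marked state is reached.

    transition: dict state -> list of (next_state, probability) pairs.
    marked: set of marked states. rng: a random.Random instance.
    """
    s, steps = start, 0
    while s not in marked and steps < max_steps:
        r = rng.random()
        acc = 0.0
        for t, p in transition[s]:   # inverse-CDF sampling of the next state
            acc += p
            if r < acc:
                s = t
                break
        steps += 1
    return steps
```

Averaging over many runs started from the stationary distribution estimates the mean hitting time; for a state that escapes with probability 1/2 per step, the hitting time is geometric with mean 2.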
On Convergence of the Nelder-Mead Simplex Algorithm for Unconstrained Stochastic Optimization
1995-05-01
the best point. Reklaitis, Ravindran, and Ragsdell (1983) further classify direct-search methods into two classes: heuristic techniques and... restricted conditions" (Reklaitis et al., 1983, p. 75). The Nelder-Mead algorithm is a heuristic direct-search method. An example of a theoretically... Pressure Liquid Chromatography. Analytica Chimica Acta, 93, 211-219. Reklaitis, G. V., Ravindran, A., & Ragsdell, K. M. (1983). Engineering
A Randomized Approximate Nearest Neighbors Algorithm - A Short Version
2011-01-13
...points is the standard Euclidean distance. For each xi, one can compute in a straightforward manner the distances to the rest of the points and thus
NASA Astrophysics Data System (ADS)
Ma, Tianren; Xia, Zhengyou
2017-05-01
Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Discovery of communities is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, wasting the additional information carried by nodes and edges, and they do not consider each node on its merits. The label propagation algorithm (LPA) is a near-linear-time algorithm that aims to find communities in a network, and it attracts many scholars owing to its high efficiency. In recent years, several improved algorithms based on LPA have been put forward. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. First, a list of node importance is obtained through calculation, and the nodes in the network are sorted in descending order of importance. On the basis of a random walk, a matrix is constructed to measure the similarity of nodes, which avoids the random choice in LPA. Second, a new metric, IAS (importance and similarity), is calculated from node importance and the similarity matrix; it is used to avoid the random selection of the original LPA and to improve the algorithm's stability. Finally, tests on real-world and synthetic networks are presented. The results show that this algorithm performs better than existing methods in finding community structure.
A Randomized Gossip Consenus Algorithm on Convex Metric Spaces
2012-01-01
NASA Astrophysics Data System (ADS)
Liu, Yuexin; Metzner, John J.; Guo, Ruyan; Yu, Francis T. S.
2005-09-01
An efficient and secure algorithm for random phase mask generation used in optical data encryption and transmission systems is proposed, based on Diffie-Hellman public key distribution. The random mask thus generated has higher security because it is never exposed to the vulnerable transmission channels. Its effectiveness in retrieving the original image and its robustness against blind manipulation have been demonstrated by our numerical results. In addition, this algorithm can easily be extended to multicast networking systems, and refreshing this shared random key is also simple to implement.
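The key-distribution idea, in which both parties derive the same secret and regenerate the phase mask locally so it never crosses the channel, can be sketched with textbook Diffie-Hellman. The parameters below are toy values; a real system needs large safe primes and a proper key-derivation step, and the mask-generation PRNG here is an illustrative assumption.

```python
import random


def dh_shared_secret(p, g, a_private, b_private):
    """Textbook Diffie-Hellman: both sides derive g^(a*b) mod p without
    ever transmitting the secret itself."""
    A = pow(g, a_private, p)        # Alice publishes A
    B = pow(g, b_private, p)        # Bob publishes B
    secret_a = pow(B, a_private, p)  # Alice's view of the secret
    secret_b = pow(A, b_private, p)  # Bob's view of the secret
    assert secret_a == secret_b
    return secret_a


def phase_mask(seed, n):
    """Both parties regenerate the same random phase mask (values in
    [0, 2*pi)) locally from the shared secret."""
    rng = random.Random(seed)
    return [2 * 3.141592653589793 * rng.random() for _ in range(n)]
```

Refreshing the mask amounts to picking fresh private exponents and repeating the exchange, which matches the paper's remark that key refresh is simple.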
Vaccine enhanced extinction in stochastic epidemic models
NASA Astrophysics Data System (ADS)
Billings, Lora; Mier-Y-Teran, Luis; Schwartz, Ira
2012-02-01
We address the problem of developing new and improved stochastic control methods that enhance extinction in disease models. In finite populations, extinction occurs when fluctuations owing to random transitions act as an effective force that drives one or more components or species to vanish. Using large deviation theory, we identify the location of the optimal path to extinction in epidemic models with stochastic vaccine controls. These models not only capture internal noise from random transitions, but also external fluctuations, such as stochastic vaccination scheduling. We quantify the effectiveness of the randomly applied vaccine over all possible distributions by using the location of the optimal path, and we identify the most efficient control algorithms. We also discuss how mean extinction times scale with epidemiological and social parameters.
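Internal-noise-driven extinction is easy to observe in a direct (Gillespie-type) simulation of a subcritical SIS model. The sketch below omits the paper's stochastic vaccination control and large-deviation analysis; it only shows the random transitions whose fluctuations drive the infected population to zero.

```python
def sis_extinction_time(N, beta, gamma, i0, rng, t_max=1e6):
    """Gillespie simulation of a stochastic SIS epidemic.

    N: population size; beta: transmission rate; gamma: recovery rate;
    i0: initial number of infected. Returns the time at which the
    infection dies out (or t when t_max is exceeded).
    """
    i, t = i0, 0.0
    while i > 0 and t < t_max:
        rate_inf = beta * i * (N - i) / N   # infection propensity
        rate_rec = gamma * i                # recovery propensity
        total = rate_inf + rate_rec
        t += rng.expovariate(total)         # exponential waiting time
        if rng.random() < rate_inf / total:
            i += 1                          # one more infected
        else:
            i -= 1                          # one recovery
    return t
```

With beta < gamma the epidemic is subcritical and extinction is fast and almost sure; control strategies such as vaccination effectively lower beta and thus shorten the expected extinction time.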
Scalable data parallel algorithms for texture synthesis using Gibbs random fields.
Bader, D A; Jaja, J; Chellappa, R
1995-01-01
This article introduces scalable data parallel algorithms for image processing. Focusing on Gibbs and Markov random field model representation for textures, we present parallel algorithms for texture synthesis, compression, and maximum likelihood parameter estimation, currently implemented on Thinking Machines CM-2 and CM-5. The use of fine-grained, data parallel processing techniques yields real-time algorithms for texture synthesis and compression that are substantially faster than the previously known sequential implementations. Although current implementations are on Connection Machines, the methodology presented enables machine-independent scalable algorithms for a number of problems in image processing and analysis.
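The elementary operation behind such texture models is a Gibbs-sampling sweep. A sequential sketch for a binary (Ising-like) field is shown below; the paper's contribution is the fine-grained data-parallel scheduling of these updates on the CM-2 and CM-5, which is not reproduced here.

```python
import math


def gibbs_sweep(field, beta, rng):
    """One Gibbs-sampling sweep over a binary Markov random field.

    field: 2D list of spins in {-1, +1}; beta: coupling strength;
    rng: a random.Random instance. Uses a 4-neighborhood with free
    boundaries and updates sites sequentially in place.
    """
    h, w = len(field), len(field[0])
    for i in range(h):
        for j in range(w):
            # sum of neighboring spins
            s = sum(field[a][b]
                    for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < h and 0 <= b < w)
            # conditional probability of spin +1 given its neighbors
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
            field[i][j] = 1 if rng.random() < p_plus else -1
    return field
```

Repeated sweeps sample increasingly texture-like configurations; at strong coupling an aligned field is (with overwhelming probability) a fixed point of the sweep, while at beta = 0 each site is an independent fair coin.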
2010-11-01
application type of analysis, only the methodology is presented here, which includes an algorithm for optimization and a corresponding conservative rate...of convergence based on no learning. The application part will be presented in the near future once data become available. It is expected that the...
Botet, Robert; Kuratsuji, Hiroshi
2010-03-01
We present a framework for the stochastic features of the polarization state of an electromagnetic wave propagating through an optical medium with both deterministic (controlled) and disordered birefringence. In this case, the Stokes parameters obey a Langevin-type equation on the Poincaré sphere. The functional integral method provides a natural tool to derive the Fokker-Planck equation for the probability distribution of the Stokes parameters. We solve the Fokker-Planck equation in the case of a random anisotropic active medium subjected to a homogeneous electromagnetic field. The possible dissipation and relaxation phenomena are studied in general and in various cases, and we give hints about how to validate the corresponding phenomenological equations experimentally.
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection is one of the most important problems in photogrammetry. It aims to recover the position and attitude of the camera at the shooting point. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the DLT model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
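The abstract combines RANSAC with the DLT camera model; since the DLT details are not given here, the consensus idea can be sketched on a deliberately simple stand-in problem: robust 2D line fitting in the presence of gross errors (outliers). The parameter values below are illustrative.

```python
import random

def ransac_line(points, n_iter=200, tol=0.1, seed=1):
    """Fit y = m*x + c robustly: repeatedly fit a minimal random sample
    (2 points) and keep the candidate with the largest consensus
    (inlier) set, so gross errors cannot distort the final model."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                        # degenerate minimal sample
        m = (y2 - y1) / (x2 - x1)           # candidate slope
        c = y1 - m * x1                     # candidate intercept
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# 20 observations on y = 2x + 1 contaminated by two gross errors.
pts = [(x, 2 * x + 1) for x in range(20)] + [(5, 50), (7, -40)]
(m, c), inliers = ransac_line(pts)
```

In the paper's setting the minimal sample would instead be the point correspondences needed to solve the DLT, and the consensus test would use reprojection residuals, but the accept/reject structure is the same.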
Li, Rui; Xiang, Bingren; Deng, Haishan; Xie, Shaofei
2017-08-17
As a potential tool for amplifying weak chromatographic peaks, the stochastic resonance algorithm was developed based upon a counterintuitive physical phenomenon, so its essential step, parameter optimization, has been perplexing and difficult for analysts. In order to avoid optimizing the system parameters on a case-by-case basis, an improved algorithm is proposed in which a constant (direct current) signal is introduced into the signal to be measured as the external force. The weak chromatographic peak can then be amplified and detected by the new algorithm using the same set of parameters. Two sets of our previous experimental data were reanalyzed using the developed algorithm and the results were satisfactory. The new algorithm is thus expected to provide a generalized solution. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gholami-Boroujeny, Shiva; Bolic, Miodrag
2016-04-01
Fitting measured bioimpedance spectroscopy (BIS) data to the Cole model and then extracting the Cole parameters is a common practice in BIS applications. The extracted Cole parameters can then be analysed as descriptors of tissue electrical properties. For a better evaluation of the physiological or pathological properties of biological tissue, accurate extraction of the Cole parameters is of great importance. This paper proposes improved Cole parameter extraction based on the bacterial foraging optimization (BFO) algorithm. We employed simulated datasets to test the performance of the BFO fitting method with regard to parameter extraction accuracy and noise sensitivity, and we compared the results with those of a least squares (LS) fitting method. The BFO method showed better robustness to noise and higher accuracy in terms of the extracted parameters. In addition, we applied our method to experimental data where bioimpedance measurements were obtained from the forearm in three different arm positions. The goal of the experiment was to explore how robust the Cole parameters are in classifying the position of the arm for different people and at different measurement times. The Cole parameters extracted by the LS and BFO methods were fed to different classifiers. Two other evolutionary algorithms, GA and PSO, were also used for comparison purposes. We showed that when the classifiers are fed with the feature sets extracted by the BFO fitting method, higher accuracy is obtained on both training and test data.
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil
2017-04-01
Reliable temporal modelling of groundwater level is significant for efficient water resources management in hydrological basins and for the prevention of possible desertification effects. In this work we propose a stochastic data driven approach of temporal monitoring and prediction that can incorporate auxiliary information. More specifically, we model the temporal (mean annual and biannual) variation of groundwater level by means of a discrete time autoregressive exogenous variable model (ARX model). The ARX model parameters and its predictions are estimated by means of the Kalman filter adaptation algorithm (KFAA). KFAA is suitable for sparsely monitored basins that do not allow for an independent estimation of the ARX model parameters. Three new modified versions of the original form of the ARX model are proposed and investigated: the first considers a larger time scale, the second a larger time delay in terms of the groundwater level input and the third considers the groundwater level difference between the last two hydrological years, which is incorporated in the model as a third input variable. We apply KFAA to time series of groundwater level values from Mires basin in the island of Crete. In addition to precipitation measurements, we use pumping data as exogenous variables. We calibrate the ARX model based on the groundwater level for the years 1981 to 2006 and use it to successfully predict the mean annual and biannual groundwater level for recent years (2007-2010).
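The KFAA itself is described in the paper; as an illustration of the general mechanism (not the authors' implementation), the sketch below treats the ARX parameters as the state of a Kalman filter and updates them recursively from observations. A single synthetic exogenous input stands in for the precipitation and pumping inputs of the paper, and the noise variances q and r are illustrative assumptions.

```python
# Kalman-filter adaptation of ARX parameters: the parameter vector
# theta = (a, b) is treated as a slowly varying state, and each
# observation equation is h[t] = a*h[t-1] + b*u[t] + noise,
# with regressor phi = (h[t-1], u[t]).

def kalman_arx(h, u, q=1e-6, r=1e-2):
    theta = [0.0, 0.0]                       # parameter estimates (a, b)
    P = [[1.0, 0.0], [0.0, 1.0]]             # estimate covariance
    for t in range(1, len(h)):
        phi = (h[t - 1], u[t])
        # Predict: parameters modelled as a random walk with variance q.
        P = [[P[0][0] + q, P[0][1]], [P[1][0], P[1][1] + q]]
        # Innovation and its variance.
        y_hat = phi[0] * theta[0] + phi[1] * theta[1]
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        s = phi[0] * Pphi[0] + phi[1] * Pphi[1] + r
        k = [Pphi[0] / s, Pphi[1] / s]       # Kalman gain
        err = h[t] - y_hat
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        P = [[P[0][0] - k[0] * Pphi[0], P[0][1] - k[0] * Pphi[1]],
             [P[1][0] - k[1] * Pphi[0], P[1][1] - k[1] * Pphi[1]]]
    return theta

# Synthetic "basin": h[t] = 0.8*h[t-1] + 0.5*u[t] with a varying forcing.
u = [((7 * t) % 10) / 10 for t in range(200)]
h = [1.0]
for t in range(1, 200):
    h.append(0.8 * h[t - 1] + 0.5 * u[t])
a, b = kalman_arx(h, u)
```

This recursive estimation is exactly what makes the approach suitable for sparsely monitored basins: parameters are refined observation by observation rather than fitted in one batch.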
The Application of Imperialist Competitive Algorithm for Fuzzy Random Portfolio Selection Problem
NASA Astrophysics Data System (ADS)
EhsanHesamSadati, Mir; Bagherzadeh Mohasefi, Jamshid
2013-10-01
This paper presents an implementation of the Imperialist Competitive Algorithm (ICA) for solving the fuzzy random portfolio selection problem, where the asset returns are represented by fuzzy random variables. Portfolio optimization is an important research field in modern finance. Using the necessity-based model, the problem with fuzzy random variables is reformulated as a linear program, and ICA is designed to find the optimum solution. To show the efficiency of the proposed method, a numerical example illustrates the whole idea of implementing ICA for the fuzzy random portfolio selection problem.
Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure
Park, Wookje; Jung, Sikhang
2014-01-01
A fault detection algorithm was developed using a similarity measure and the random forest algorithm. The resulting algorithm was applied to an unmanned aerial vehicle (UAV) that we had prepared. The similarity measure was designed with the help of distance information, and its usefulness was also verified by proof. Fault decisions were carried out by calculating a weighted similarity measure. Twelve available coefficients among the healthy and faulty status data groups were used to make the decision. The similarity measure weighting was obtained through the random forest algorithm (RFA), which provides data priority. In order to obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated. Through repeated trials of the similarity calculation, the useful amount of data was obtained. PMID:25057508
Fault detection of aircraft system with random forest algorithm and similarity measure.
Lee, Sanghyuk; Park, Wookje; Jung, Sikhang
2014-01-01
A fault detection algorithm was developed using a similarity measure and the random forest algorithm. The resulting algorithm was applied to an unmanned aerial vehicle (UAV) that we had prepared. The similarity measure was designed with the help of distance information, and its usefulness was also verified by proof. Fault decisions were carried out by calculating a weighted similarity measure. Twelve available coefficients among the healthy and faulty status data groups were used to make the decision. The similarity measure weighting was obtained through the random forest algorithm (RFA), which provides data priority. In order to obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated. Through repeated trials of the similarity calculation, the useful amount of data was obtained.
Submicron structure random field on granular soil material with retinex algorithm optimization
NASA Astrophysics Data System (ADS)
Liang, Yu; Tao, Chenyuan; Zhou, Bingcheng; Huang, Shuai; Huang, Linchong
2017-06-01
In this paper, a Retinex scale-optimized image enhancement algorithm is proposed, which can enhance micro-vision images and eliminate the influence of uneven illumination. Based on that, a random geometric model of the microstructure of granular materials is established with the Monte Carlo method, and a numerical simulation including the consolidation process of granular materials is compared with experimental data. The results prove that the random field method with the Retinex image enhancement algorithm is effective: after using the Retinex algorithm to enhance the CT image, the image of the microstructure of granular materials becomes clear and the contrast ratio is improved. The fidelity of the enhanced image is higher than that obtained with other methods, which shows that the algorithm preserves the microstructural information of the image well. The result of the numerical simulation is similar to the one obtained from a conventional triaxial consolidation test, which proves that the simulation result is reliable.
Stochastic Pseudo-Boolean Optimization
2011-07-31
…analysis of two-stage stochastic minimum s-t cut problems; (iv) exact solution algorithm for a class of stochastic bilevel knapsack problems; (v) exact… Contents include: 5. Bilevel Knapsack Problems with Stochastic Right-Hand Sides; 6. Two-Stage Stochastic Assignment Problems; 6.1 Introduction…programming formulations and related computational complexity issues. Section 5 considers a specific stochastic extension of the bilevel knapsack…
Prados-Privado, María; Prados-Frutos, Juan Carlos; Calvo-Guirado, José Luis; Bea, José Antonio
2016-11-01
To measure fatigue in dental implants and in its components, it is necessary to use a probabilistic analysis since the randomness in the output depends on a number of parameters (such as fatigue properties of titanium and applied loads, unknown beforehand as they depend on mastication habits). The purpose is to apply a probabilistic approximation in order to predict fatigue life, taking into account the randomness of variables. More accuracy on the results has been obtained by taking into account different load blocks with different amplitudes, as happens with bite forces during the day and allowing us to know how effects have different type of bruxism on the piece analysed.
NASA Astrophysics Data System (ADS)
Sivakumar, Krishnamoorthy; Goutsias, John I.
1998-09-01
We study the problem of simulating a class of Gibbs random field models, called morphologically constrained Gibbs random fields, using Markov chain Monte Carlo sampling techniques. Traditional single-site updating Markov chain Monte Carlo sampling algorithms, like the Metropolis algorithm, tend to converge extremely slowly when used to simulate these models, particularly at low temperatures and for constraints involving large geometrical shapes. Moreover, the morphologically constrained Gibbs random fields are not, in general, Markov. Hence, a Markov chain Monte Carlo sampling algorithm based on the Gibbs sampler is not possible. We propose a variant of the Metropolis algorithm that, at each iteration, allows multi-site updating and converges substantially faster than the traditional single-site updating algorithm. The set of sites that are updated at a particular iteration is specified in terms of a shape parameter and a size parameter. Computation of the acceptance probability involves a 'test ratio,' which requires computation of the ratio of the probabilities of the current and new realizations. Because of the special structure of our energy function, this computation can be done by means of a simple, local iterative procedure. Therefore, the lack of Markovianity does not impose any additional computational burden for model simulation. The proposed algorithm has been used to simulate a number of image texture models, both synthetic and natural.
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
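The report's exact derivation is not reproduced in the abstract; one standard construction consistent with the description is the Box-Muller transform for two independent standard normals, with the second mixed to induce the requested correlation coefficient:

```python
import math
import random

def bivariate_normal(mu1, mu2, sigma1, sigma2, rho, rng):
    """One (x, y) pair from a bivariate normal with the given means,
    standard deviations, and correlation coefficient rho."""
    u1 = 1.0 - rng.random()                  # in (0, 1], safe for log
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    z1 = r * math.cos(2.0 * math.pi * u2)    # independent N(0, 1) pair
    z2 = r * math.sin(2.0 * math.pi * u2)
    x = mu1 + sigma1 * z1
    y = mu2 + sigma2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y

# Sample statistics should match the requested parameters.
rng = random.Random(42)
pairs = [bivariate_normal(1.0, -2.0, 2.0, 0.5, 0.8, rng) for _ in range(50_000)]
mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs) / len(pairs))
sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs) / len(pairs))
corr = sum((x - mx) * (y - my) for x, y in pairs) / (len(pairs) * sx * sy)
```

As in the abstract, the method is exact in theory; sampling error and the quality of the underlying uniform generator set the practical accuracy.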
Shushin, A I
2008-03-01
Some specific features and extensions of the continuous-time random-walk (CTRW) approach are analyzed in detail within the Markovian representation (MR) and the CTRW-based non-Markovian stochastic Liouville equation (SLE). In the MR, CTRW processes are represented by multidimensional Markovian ones. In this representation the probability density function (PDF) W(t) of fluctuation renewals is associated with that of reoccurrences in a certain jump state of some Markovian controlling process. Within the MR the non-Markovian SLE, which describes the effect of CTRW-like noise on the relaxation of dynamic and stochastic systems, is generalized to take into account the influence of relaxing systems on the statistical properties of noise. Some applications of the generalized non-Markovian SLE are discussed. In particular, it is applied to study two modifications of the CTRW approach. One of them considers cascaded CTRWs in which the controlling process is actually a CTRW-like one controlled by another CTRW process, controlled in turn by a third one, etc. Within the MR a simple expression for the PDF W(t) of the total controlling process is obtained in terms of Markovian variants of controlling PDFs in the cascade. The expression is shown to be especially simple and instructive in the case of anomalous processes determined by the long-time-tailed W(t). The cascaded CTRWs can model the effect of the complexity of a system on the relaxation kinetics (in glasses, fractals, branching media, ultrametric structures, etc.). Another CTRW modification describes the kinetics of processes governed by fluctuating W(t). Within the MR the problem is analyzed in a general form without restrictive assumptions on the correlations of PDFs of consecutive renewals. The analysis shows that fluctuations of W(t) can strongly affect the kinetics of the process. Possible manifestations of this effect are discussed.
Probabilistic DHP adaptive critic for nonlinear stochastic control systems.
Herzallah, Randa
2013-06-01
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete-time systems. A simulated example is used to demonstrate the use of the algorithm and encouraging results have been obtained.
NASA Astrophysics Data System (ADS)
Lu, Dawei; Zhu, Jing; Zou, Ping; Peng, Xinhua; Yu, Yihua; Zhang, Shanmin; Chen, Qun; Du, Jiangfeng
2010-02-01
An important quantum search algorithm based on the quantum random walk performs an oracle search on a database of N items with O(√N) calls, yielding a speedup similar to the Grover quantum search algorithm. The algorithm was implemented on a quantum information processor of three-qubit liquid-crystal nuclear magnetic resonance (NMR) in the case of finding 1 out of 4, and the diagonal elements' tomography of all the final density matrices was completed with comprehensible one-dimensional NMR spectra. The experimental results agree well with the theoretical predictions.
Lu Dawei; Peng Xinhua; Du Jiangfeng; Zhu Jing; Zou Ping; Yu Yihua; Zhang Shanmin; Chen Qun
2010-02-15
An important quantum search algorithm based on the quantum random walk performs an oracle search on a database of N items with O(√N) calls, yielding a speedup similar to the Grover quantum search algorithm. The algorithm was implemented on a quantum information processor of three-qubit liquid-crystal nuclear magnetic resonance (NMR) in the case of finding 1 out of 4, and the diagonal elements' tomography of all the final density matrices was completed with comprehensible one-dimensional NMR spectra. The experimental results agree well with the theoretical predictions.
Eigenvalue density of linear stochastic dynamical systems: A random matrix approach
NASA Astrophysics Data System (ADS)
Adhikari, S.; Pastur, L.; Lytova, A.; Du Bois, J.
2012-02-01
Eigenvalue problems play an important role in the dynamic analysis of engineering systems modeled using the theory of linear structural mechanics. When uncertainties are considered, the eigenvalue problem becomes a random eigenvalue problem. In this paper the density of the eigenvalues of a discretized continuous system with uncertainty is discussed by considering the model where the system matrices are Wishart random matrices. An analytical expression involving the Stieltjes transform is derived for the density of the eigenvalues when the dimension of the corresponding random matrix becomes asymptotically large. The mean matrices and the dispersion parameters associated with the mass and stiffness matrices are necessary to obtain the density of the eigenvalues in the framework of the proposed approach. The applicability of a simple eigenvalue density function, known as the Marchenko-Pastur (MP) density, is investigated. The analytical results are demonstrated by numerical examples involving a plate and the tail boom of a helicopter with uncertain properties. The new results are validated using an experiment on a vibrating plate with randomly attached spring-mass oscillators where 100 nominally identical samples are physically created and individually tested within a laboratory framework.
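The MP law mentioned above is easy to illustrate numerically: for a Wishart matrix W = X Xᵀ / n with X a p x n matrix of i.i.d. standard normals and aspect ratio c = p/n, the eigenvalue histogram approaches the Marchenko-Pastur density on the support [(1-√c)², (1+√c)²]. The sizes p, n below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 800
c = p / n
X = rng.standard_normal((p, n))
eigs = np.linalg.eigvalsh(X @ X.T / n)       # Wishart eigenvalues

lam_minus = (1.0 - np.sqrt(c)) ** 2          # lower edge of MP support
lam_plus = (1.0 + np.sqrt(c)) ** 2           # upper edge of MP support

def mp_density(lam):
    """Marchenko-Pastur density for unit-variance entries, c = p/n <= 1."""
    out = np.zeros_like(lam)
    inside = (lam > lam_minus) & (lam < lam_plus)
    out[inside] = np.sqrt((lam_plus - lam[inside]) *
                          (lam[inside] - lam_minus)) / (2.0 * np.pi * c *
                                                        lam[inside])
    return out

# The density integrates to ~1 and the eigenvalues fall in its support.
grid = np.linspace(lam_minus, lam_plus, 2001)
mass = float(np.sum(mp_density(grid)) * (grid[1] - grid[0]))
```

In the paper's framework, the mean matrices and dispersion parameters of the mass and stiffness matrices play the role that c and the entry variance play in this bare-bones example.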
Stochastic Simulation Techniques for Partition Function Approximation of Gibbs Random Field Images
1991-06-01
NASA Astrophysics Data System (ADS)
Li, Z.; Zhang, Y.
2004-12-01
Stochastic analysis and numerical simulations were carried out to study the temporal scaling in the time series of water table fluctuations in a one-dimensional heterogeneous aquifer under spatially and temporally random recharge. Our previous study, based on spectral analyses of the hourly hydraulic head (h) data observed over a four-year period at seven monitoring wells in the Walnut Creek watershed in Iowa, found that scaling of water table fluctuations may exist and that the fractal dimension varies over space. The estimated baseflow in the Walnut Creek and four other watersheds has temporal scaling, but there exist two distinct slopes with a break at about 30 days in the log-frequency versus log-power-spectral-density plot. It was also found that the hydraulic head in an aquifer may fluctuate as a fractal in time in response to either a white-noise or a fractal recharge process, depending on how quickly the hydraulic head responds to recharge events and on the physical parameters of the aquifer (i.e., transmissivity and specific yield). Numerical simulations were conducted to verify whether or not the hydraulic head and flux (baseflow) behave as fractal processes and whether their fractal dimensions vary spatially, using 1-D transient groundwater flow in a heterogeneous aquifer subject to temporally and spatially random recharge (white noise in time and exponential covariance in space). The simulation results confirm the previous findings. We also derived the moment equations, which were solved to obtain the mean hydraulic head and mean flux (baseflow). The spectra of the mean hydraulic head and flux (baseflow) are plotted and analyzed to test our hypotheses, and the effects of aquifer heterogeneity and the spatial-temporal random recharge process on the head fluctuations and spectrum are presented and discussed.
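The spectral diagnostic used above, estimating a temporal scaling exponent as the slope of log power spectral density against log frequency, can be sketched on synthetic data: white noise should give slope ~ 0, while a random walk (integrated white noise) gives slope ~ -2. The frequency band limits are illustrative choices, not the 30-day break of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 16
white = rng.standard_normal(n)     # white-noise "recharge" stand-in
walk = np.cumsum(white)            # integrated (Brownian-like) series

def spectral_slope(x, f_lo, f_hi):
    """Least-squares slope of log-periodogram vs log-frequency
    within the band [f_lo, f_hi] (frequencies in cycles per sample)."""
    f = np.fft.rfftfreq(len(x))
    p = np.abs(np.fft.rfft(x)) ** 2
    sel = (f >= f_lo) & (f <= f_hi)
    return np.polyfit(np.log(f[sel]), np.log(p[sel]), 1)[0]

slope_white = spectral_slope(white, 2e-3, 0.4)   # expect ~ 0
slope_walk = spectral_slope(walk, 2e-3, 0.05)    # expect ~ -2
```

A break between two distinct slopes, as reported for the baseflow series, would show up as two bands with different fitted slopes.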
NASA Astrophysics Data System (ADS)
Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.
2016-09-01
There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from maximum, mean, variance Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2013-10-01
The paper deals with the statistical analysis of the resistance of a hot-rolled steel IPE beam under major-axis bending. The lateral-torsional buckling stability problem of an imperfect beam is described. The influence of bending moments and warping torsion on the ultimate limit state of the IPE beam with random imperfections is analyzed. The resistance is calculated by means of a closed-form solution. The initial geometrical imperfections of the beam are considered to be identical in form to the first eigenmode of buckling. Changes in the mean values of the resistance, the mean values of the internal bending moments, the variance of the resistance and the variance of the internal bending moments were studied in dependence on the beam non-dimensional slenderness. The values of non-dimensional slenderness for which the statistical characteristics of the internal moments associated with random resistance are maximal were determined.
Rare-event Analysis and Computational Methods for Stochastic Systems Driven by Random Fields
2014-12-29
…research develops asymptotic theories and numerical methods for computing rare-event probabilities associated with random fields and the associated…dynamics, neuroscience, fiber optics, astronomy, civil engineering, engineering design, ocean and earth sciences, and so forth. We perform risk analysis…of such systems by investigating the asymptotic behavior of certain interesting rare events.
Extinction transition in stochastic population dynamics in a random, convective environment
NASA Astrophysics Data System (ADS)
Juhász, Róbert
2013-10-01
Motivated by modeling the dynamics of a population living in a flowing medium where the environmental factors are random in space, we have studied an asymmetric variant of the one-dimensional contact process, where the quenched random reproduction rates are systematically greater in one direction than in the opposite one. The spatial disorder turns out to be a relevant perturbation but, according to results of Monte Carlo simulations, the behavior of the model at the extinction transition is different from the (infinite-randomness) critical behavior of the disordered symmetric contact process. Depending on the strength a of the asymmetry, the critical population drifts either with a finite velocity or with an asymptotically vanishing velocity as x(t) ∼ t^{μ(a)}, where μ(a) < 1. Dynamical quantities are non-self-averaging at the extinction transition; the survival probability, for instance, shows multiscaling, i.e. it is characterized by a broad spectrum of effective exponents. For a sufficiently weak asymmetry, a Griffiths phase appears below the extinction transition, where the survival probability decays as a non-universal power of the time while, above the transition, another extended phase emerges, where the front of the population advances anomalously with a diffusion exponent continuously varying with the control parameter.
Liu, Baoshun; Zhao, Xiujian; Yu, Jiaguo; Fujishima, Akira; Nakata, Kazuya
2016-11-23
In the photocatalysis of porous nano-crystalline materials, the transfer of electrons to O2 plays an important role; it comprises electron transport to the photocatalytic active centers and the successive interfacial transfer to O2. The slower of the two determines the overall rate of electron transfer in the photocatalytic reaction. Taking the photocatalysis of porous nano-crystalline TiO2 as an example, although some experimental results have shown that the electron kinetics are limited by the interfacial transfer, a deep theoretical understanding of the microscopic mechanism is still lacking. In the present research, a stochastic quasi-equilibrium (QE) theoretical model and a stochastic random walking (RW) model were established to discuss electron transport and electron interfacial transfer, taking into account electron multi-trapping transport and electron interfacial transfer from the photocatalytic active centers to O2. By carefully investigating the effect of the electron Fermi level (EF) and the number of photocatalytic centers on electron transport, we showed that the time taken for an electron to transport to a photocatalytic center predicted by the stochastic RW model was much lower than that predicted by the stochastic QE model, indicating that the electrons cannot reach a QE state during their transport to photocatalytic centers. The stochastic QE model predicted that the electron kinetics of a real photocatalysis for porous nano-crystalline TiO2 should be limited by electron transport, whereas the stochastic RW model showed that the electron kinetics of a real photocatalysis can be limited by the interfacial transfer. Our simulation results show that the stochastic RW model is more in line with the real electron kinetics observed in experiments; it is therefore concluded that the photoinduced electrons cannot reach a QE state before transferring to O2.
NASA Astrophysics Data System (ADS)
Lapko, A. V.; Lapko, V. A.; Yuronen, E. A.
2016-11-01
A new technique for testing the hypothesis of independence of random variables is offered. It is based on a nonparametric pattern recognition algorithm. The considered technique does not require discretization of the range of values of the random variables.
Simple algorithm for the correction of MRI image artefacts due to random phase fluctuations.
Broche, Lionel M; Ross, P James; Davies, Gareth R; Lurie, David J
2017-07-24
Fast Field-Cycling (FFC) MRI is a novel technology that allows varying the main magnetic field B0 during the pulse sequence, from the nominal field (usually hundreds of millitesla) down to Earth's field or below. This technique uses resistive magnets powered by fast amplifiers. One of the challenges with this method is to stabilise the magnetic field during the acquisition of the NMR signal. Indeed, a typical consequence of field instability is small, random phase variations between each line of k-space resulting in artefacts, similar to those which occur due to homogeneous motion but harder to correct as no assumption can be made about the phase error, which appears completely random. Here we propose an algorithm that can correct for the random phase variations induced by field instabilities without prior knowledge about the phase error. The algorithm exploits the fact that ghosts caused by field instability manifest in image regions which should be signal free. The algorithm minimises the signal in the background by finding an optimum phase correction for each line of k-space and repeats the operation until the result converges, leaving the background free of signal. We showed the conditions for which the algorithm is robust and successfully applied it on images acquired on FFC-MRI scanners. The same algorithm can be used for various applications other than Fast Field-Cycling MRI. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
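The background-minimisation idea can be sketched in a toy 2D example, under assumed simplifications: one random phase error per k-space line and a known signal-free background mask (as in the paper, the signal-free region is what makes the correction possible). The per-line correction below is found by a brute-force search over candidate phases, repeated over a few sweeps until the result converges; the authors' actual optimiser is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
img = np.zeros((N, N))
img[12:20, 12:20] = 1.0                      # object in the centre
bg = np.ones((N, N), dtype=bool)
bg[10:22, 10:22] = False                     # signal-free background mask

k = np.fft.fft2(img)
errors = rng.uniform(-np.pi, np.pi, N)
k_bad = k * np.exp(1j * errors)[:, None]     # random phase per k-space line

def background_energy(kspace):
    """Energy of the reconstructed image inside the background mask;
    ghosting from phase errors shows up here as nonzero signal."""
    rec = np.fft.ifft2(kspace)
    return float(np.sum(np.abs(rec[bg]) ** 2))

def corrected(kspace, row, phase):
    out = kspace.copy()
    out[row] *= np.exp(1j * phase)
    return out

k_fix = k_bad.copy()
trials = np.linspace(-np.pi, np.pi, 64, endpoint=False)
for _ in range(3):                           # repeat sweeps until converged
    for row in range(N):
        best = min(trials,
                   key=lambda p: background_energy(corrected(k_fix, row, p)))
        k_fix[row] *= np.exp(1j * best)
```

Because the zero-phase candidate is always among the trials, each update can only reduce (never increase) the background energy, which is what makes the iteration converge.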
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Wang, Shyh J.
1992-01-01
This paper presents a method for utilizing artificial neural networks for direct adaptive control of dynamic systems with poorly known dynamics. The neural network weights (controller gains) are adapted in real time using state measurements and a random search optimization algorithm. The results are demonstrated via simulation using two highly nonlinear systems.
Stochastic differential equations
Sobczyk, K.
1990-01-01
This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed, in particular an insight is given into both the mathematical structure, and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.
Bad News Comes in Threes: Stochastic Structure in Random Events (Invited)
NASA Astrophysics Data System (ADS)
Newman, W. I.; Turcotte, D. L.; Malamud, B. D.
2013-12-01
Plots of random numbers have been known for nearly a century to show repetitive peak-to-peak sequences with an average length of 3. Geophysical examples include events such as earthquakes, geyser eruptions, and magnetic substorms. We consider a classic model in statistical physics, the Langevin equation x[n+1] = α*x[n] + η[n], where x[n] is the nth value of a measured quantity and η[n] is a random number, commonly a Gaussian white noise. Here, α is a parameter that ranges from 0, corresponding to independent random data, to 1, corresponding to Brownian motion which preserves memory of past steps. We show that, for α = 0, the mean peak-to-peak sequence length is 3 while, for α = 1, the mean sequence length is 4. We obtain the physical and mathematical properties of this model, including the distribution of peak-to-peak sequence lengths that can be expected. We compare the theory with observations of earthquake magnitudes emerging from large events, observations of the auroral electrojet index as a measure of global electrojet activity, and time intervals observed between successive eruptions of Old Faithful Geyser in Yellowstone National Park. We demonstrate that the largest earthquake events as described by their magnitudes are consistent with our theory for α = 0, thereby confronting the aphorism (and our analytic theory) that "bad news comes in threes." Electrojet activity, on the other hand, demonstrates some memory effects, consistent with the intuitive picture of the magnetosphere presenting a capacitor-plate like system that preserves memory. Old Faithful Geyser, finally, shows strong antipersistence effects between successive events, i.e. long-time intervals are followed by short ones, and vice versa. As an additional application, we apply our theory to the observed 3-4 year mammalian population cycles.
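The quoted mean sequence lengths of 3 (α = 0) and 4 (α = 1) can be checked numerically. The sketch below is ours, assuming Gaussian white noise for η[n] and counting local peaks x[i−1] < x[i] > x[i+1]:

```python
import random

def mean_peak_spacing(alpha, n=200000, seed=42):
    """Simulate x[k+1] = alpha*x[k] + eta[k] with Gaussian white noise
    and return the mean number of steps between successive local peaks
    (points with x[i-1] < x[i] > x[i+1])."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = alpha * x + rng.gauss(0.0, 1.0)
        series.append(x)
    peaks = [i for i in range(1, n - 1)
             if series[i - 1] < series[i] > series[i + 1]]
    return sum(b - a for a, b in zip(peaks, peaks[1:])) / (len(peaks) - 1)
```

For α = 0 any interior point is a peak with probability 1/3, giving the mean spacing of 3; for α = 1 a peak requires a positive step followed by a negative one (probability 1/4), giving 4.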
NASA Astrophysics Data System (ADS)
Kawai, Reiichiro
2012-02-01
Continuous-time modeling of random searches is designed to be robust to the sampling rate, while the spatial model is required to be of rotation-invariant type, which is often computationally prohibitive. Such computational difficulty may be circumvented by employing a model with independent components. We demonstrate that its disadvantages in statistical properties are blurred at lower sampling frequencies. We propose a quantitative criterion for choosing the sampling rate at which a spatial model with independent components resembles a rotation-invariant model. Our findings have the potential to assist the observer in employing simpler models in the continuous-time framework, avoiding the expensive computation required for statistical inference.
Stochastic modeling of carbon oxidation
Chen, W.Y.; Kulkarni, A.; Milum, J.L.; Fan, L.T.
1999-12-01
Recent studies of carbon oxidation by scanning tunneling microscopy indicate that measured rates of carbon oxidation can be affected by randomly distributed defects in the carbon structure, which vary in size. Nevertheless, the impact of this observation on the analysis or modeling of the oxidation rate has not been critically assessed. This work focuses on the stochastic analysis of the dynamics of carbon clusters' conversions during the oxidation of a carbon sheet. According to the classic model of Nagle and Strickland-Constable (NSC), two classes of carbon clusters are involved in three types of reactions: gasification of basal-carbon clusters, gasification of edge-carbon clusters, and conversion of the edge-carbon clusters to the basal-carbon clusters due to the thermal annealing. To accommodate the dilution of basal clusters, however, the NSC model is modified for the later stage of oxidation in this work. Master equations governing the numbers of three classes of carbon clusters, basal, edge and gasified, are formulated from stochastic population balance. The stochastic pathways of three different classes of carbon during oxidation, that is, their means and the fluctuations around these means, have been numerically simulated independently by the algorithm derived from the master equations, as well as by an event-driven Monte Carlo algorithm. Both algorithms have given rise to identical results.
High-resolution climate data over conterminous US using random forest algorithm
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Nemani, R. R.; Wang, W.
2014-12-01
We developed a new methodology to create high-resolution precipitation data using the random forest algorithm. We have used two approaches: physical downscaling from GCM data using a regional climate model, and interpolation from ground observation data. The physical downscaling method can be applied only to a small region because it is computationally expensive and complex to deploy. On the other hand, interpolation schemes from ground observations do not consider physical processes. In this study, we utilized the random forest algorithm to integrate atmospheric reanalysis data, satellite data, topography data, and ground observation data. First, we considered situations where precipitation is the same across the domain, largely dominated by storm-like systems. We then picked several points to train the random forest algorithm. The random forest algorithm estimates out-of-bag errors spatially and produces the relative importance of each input variable. This methodology has the following advantages. (1) The methodology can ingest any spatial dataset to improve downscaling; even non-precipitation datasets can be ingested, such as satellite cloud cover data, radar reflectivity images, or modeled convective available potential energy. (2) The methodology is purely statistical, so no physical assumptions are required, whereas most interpolation schemes assume an empirical relationship between precipitation and elevation for orographic precipitation. (3) Low-quality values in ingested data do not cause critical bias in the results because of the ensemble nature of random forest; therefore, users do not need to pay special attention to quality control of input data compared with other interpolation methodologies. (4) The same methodology can be applied to produce other high-resolution climate datasets, such as wind and cloud cover, variables that are usually hard to interpolate with conventional algorithms. In conclusion, the proposed methodology can produce reasonable
Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.
2016-01-01
Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from maximum, mean, variance, Gaussian, and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols, and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82–0.98), specificity of 0.89 (range: 0.70–0.98), and accuracy of 0.90 (range: 0.76–0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool was able to produce fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
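The flower pollination algorithm itself is not reproduced here; as a hedged sketch of the underlying problem, the code below implements Otsu's multi-threshold objective and a plain randomized search over threshold locations, on a synthetic trimodal histogram (histogram shape and all parameter values are our own assumptions):

```python
import random

def otsu_objective(hist, thresholds):
    """Otsu's between-class variance for a grey-level histogram split
    into classes [0,t1), [t1,t2), ... (to be maximised)."""
    total = float(sum(hist))
    mu_all = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + list(thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        w = sum(hist[lo:hi]) / total
        if w > 0.0:
            mu = sum(i * hist[i] for i in range(lo, hi)) / (w * total)
            var += w * (mu - mu_all) ** 2
    return var

def random_threshold_search(hist, k=2, iters=2000, seed=7):
    """Randomly sample threshold sets and keep the best-scoring one;
    metaheuristics such as flower pollination search this same space
    more cleverly."""
    rng = random.Random(seed)
    best_v, best_t = -1.0, None
    for _ in range(iters):
        t = sorted(rng.sample(range(1, len(hist)), k))
        v = otsu_objective(hist, t)
        if v > best_v:
            best_v, best_t = v, t
    return best_t

# Synthetic trimodal histogram with modes at grey levels 10, 30 and 50.
hist = [0] * 64
for mode in (10, 30, 50):
    for d in range(-4, 5):
        hist[mode + d] += 100 - 20 * abs(d)

t1, t2 = random_threshold_search(hist)
```

For this well-separated histogram, the optimal thresholds fall in the empty gaps between the three modes.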
Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions.
de Andrade, Carlos Eduardo; Toso, Rodrigo Franco; Resende, Mauricio G C; Miyazawa, Flávio Keidi
2015-01-01
In this paper we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that makes use of solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
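A biased random-key GA can be sketched compactly. The toy auction instance, the greedy decoder, and the parameter values below are our own illustrative assumptions, not the paper's instances or exact operators:

```python
import random

# Toy auction: each bid is (price, items). The optimal revenue here is
# 17.0 (accept the singleton bids on items 0 and 1 plus the {2,3} bid).
BIDS = [(5.0, {0}), (5.0, {1}), (8.0, {0, 1}),
        (7.0, {2, 3}), (3.0, {2}), (3.0, {3})]

def decode(keys, bids=BIDS):
    """Greedy decoder: visit bids in increasing random-key order and
    accept a bid whenever none of its items has been sold yet."""
    revenue, sold = 0.0, set()
    for i in sorted(range(len(bids)), key=lambda j: keys[j]):
        price, items = bids[i]
        if sold.isdisjoint(items):
            sold |= items
            revenue += price
    return revenue

def brkga(bids=BIDS, pop=30, elite=6, mutants=6, rho=0.7, gens=40, seed=3):
    """Biased random-key GA: elites survive unchanged, mutants are fresh
    random key vectors, and each offspring gene comes from an elite
    parent with probability rho (the bias)."""
    rng = random.Random(seed)
    n = len(bids)
    population = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ch: -decode(ch, bids))
        nxt = population[:elite]
        nxt += [[rng.random() for _ in range(n)] for _ in range(mutants)]
        while len(nxt) < pop:
            e = rng.choice(population[:elite])
            o = rng.choice(population[elite:])
            nxt.append([e[g] if rng.random() < rho else o[g]
                        for g in range(n)])
        population = nxt
    return max(decode(ch, bids) for ch in population)
```

The random-key encoding keeps every chromosome feasible by construction: all constraint handling lives in the decoder.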
Miller, Ross H; Gillette, Jason C; Derrick, Timothy R; Caldwell, Graham E
2009-04-01
Muscle forces during locomotion are often predicted using static optimisation with sequential quadratic programming (SQP). SQP has been criticised for over-estimating force magnitudes and under-estimating co-contraction. These problems may be related to SQP's difficulty in locating the global minimum of complex optimisation problems. Algorithms designed to locate the global minimum may be useful in addressing these problems. Muscle forces for 18 flexors and extensors of the lower extremity were predicted for 10 subjects during the stance phase of running. Static optimisation using SQP and two random search (RS) algorithms (a genetic algorithm and simulated annealing) estimated muscle forces by minimising the sum of cubed muscle stresses. The RS algorithms predicted smaller peak forces (42% smaller on average) and smaller muscle impulses (46% smaller on average) than SQP, and located solutions with smaller cost function scores. Results suggest that RS may be a more effective tool than SQP for minimising the sum of cubed muscle stresses in static optimisation.
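Simulated annealing, one of the two RS algorithms named above, can be sketched on a miniature version of this cost function. The three-muscle problem below (areas, moment arms, required moment, penalty weight, cooling schedule) is entirely our own invented illustration:

```python
import math
import random

# Hypothetical 3-muscle problem: cross-sectional areas A, moment arms R,
# and a required net joint moment M.
A, R, M = (1.0, 2.0, 3.0), (0.4, 0.5, 0.6), 3.0

def cost(F, w=100.0):
    """Sum of cubed muscle stresses plus a quadratic penalty enforcing
    the joint-moment constraint sum(r_i * F_i) = M."""
    stress = sum((f / a) ** 3 for f, a in zip(F, A))
    moment = sum(r * f for r, f in zip(R, F))
    return stress + w * (moment - M) ** 2

def anneal(iters=20000, seed=0):
    """Simulated annealing over non-negative muscle forces: worsening
    moves are accepted with probability exp(-delta/T) under a
    geometrically cooling temperature T."""
    rng = random.Random(seed)
    F = [M / (len(R) * r) for r in R]          # start on the constraint
    c = cost(F)
    for k in range(iters):
        T = 0.9995 ** k
        trial = list(F)
        i = rng.randrange(len(F))
        trial[i] = max(0.0, trial[i] + rng.gauss(0.0, 0.1))
        ct = cost(trial)
        if ct < c or rng.random() < math.exp(-(ct - c) / T):
            F, c = trial, ct
    return F, c

forces, final_cost = anneal()
```

The cubed-stress criterion strongly favours load sharing, so the annealer shifts force toward the larger muscles while the penalty keeps the net moment near M.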
Daniels, Noah M; Gallant, Andrew; Ramsey, Norman; Cowen, Lenore J
2015-01-01
We introduce MRFy, a tool for protein remote homology detection that captures beta-strand dependencies in a Markov random field. Over a set of 11 SCOP beta-structural superfamilies, MRFy shows a 14 percent improvement in mean Area Under the Curve for the motif recognition problem as compared to HMMER, a 25 percent improvement as compared to RAPTOR, a 14 percent improvement as compared to HHPred, and an 18 percent improvement as compared to CNFPred and RaptorX. MRFy was implemented in the Haskell functional programming language and parallelizes well on multi-core systems. MRFy is available, as source code as well as an executable, from http://mrfy.cs.tufts.edu/.
NASA Astrophysics Data System (ADS)
Becker, Markus; Weerawardane, Thushara Lanka; Li, Xi; Görg, Carmelita
Pseudo Random Number Generators (PRNGs) are the basis for stochastic simulations. The usage of good generators is essential for valid simulation results. OPNET Modeler, a well-known tool for the simulation of communication networks, provides a Pseudo Random Number Generator. The extension of OPNET Modeler with external generators and additional statistical evaluation methods, performed for this paper, increases the flexibility and options available in simulation studies.
Helaers, Raphaël; Milinkovitch, Michel C
2010-07-15
The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software package is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA), together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of the substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command-line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for setting parameters, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics in a single software package allows rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2.0 gives access both to high
Stochastic polarized line formation. I. Zeeman propagation matrix in a random magnetic field
NASA Astrophysics Data System (ADS)
Frisch, H.; Sampoorna, M.; Nagendra, K. N.
2005-10-01
This paper considers the effect of a random magnetic field on Zeeman line transfer, assuming that the scales of fluctuations of the random field are much smaller than photon mean free paths associated with the line formation (micro-turbulent limit). The mean absorption and anomalous dispersion coefficients are calculated for random fields with a given mean value, isotropic or anisotropic Gaussian distributions azimuthally invariant about the direction of the mean field. Following Domke & Pavlov (1979, Ap&SS, 66, 47), the averaging process is carried out in a reference frame defined by the direction of the mean field. The main steps are described in detail. They involve the writing of the Zeeman matrix in the polarization matrix representation of the radiation field and a rotation of the line of sight reference frame. Three types of fluctuations are considered: fluctuations along the direction of the mean field, fluctuations perpendicular to the mean field, and isotropic fluctuations. In each case, the averaging method is described in detail and fairly explicit expressions for the mean coefficients are established, most of which were given in Dolginov & Pavlov (1972, Soviet Ast., 16, 450) or Domke & Pavlov (1979, Ap&SS, 66, 47). They include the effect of a microturbulent velocity field with zero mean and a Gaussian distribution. A detailed numerical investigation of the mean coefficients illustrates the two effects of magnetic field fluctuations: broadening of the σ-components by fluctuations of the magnetic field intensity, leaving the π-components unchanged, and averaging over the angular dependence of the π and σ components. For longitudinal fluctuations only the first effect is at play. For isotropic and perpendicular fluctuations, angular averaging can modify the frequency profiles of the mean coefficients quite drastically with the appearance of an unpolarized central component in the diagonal absorption coefficient, even when the mean field is in
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to the advancement in sensor technology, the growing large medical image data have the ability to visualize the anatomical changes in biological tissues. As a consequence, the medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend the functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value. In practice, they are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383
Autoclassification of the Variable 3XMM Sources Using the Random Forest Machine Learning Algorithm
NASA Astrophysics Data System (ADS)
Farrell, Sean A.; Murphy, Tara; Lo, Kitty K.
2015-11-01
In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin
2009-10-01
Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed and its optimisation parameters and bandwidth limitations are studied.
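The SPGD control loop can be sketched in a few lines. This is a simplified scalar model of our own (unit-amplitude beams, on-axis power metric, invented dither and gain values), not the paper's simulation:

```python
import cmath
import random

def combined_power(phases):
    """On-axis intensity of unit-amplitude beams summed coherently."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) ** 2

def spgd(n=5, iters=3000, delta=0.1, gain=0.3, seed=5):
    """Stochastic parallel gradient descent: dither all phases at once
    by random +/-delta, measure the resulting change in the power
    metric, and step each phase along its own dither times that change
    (an unbiased parallel estimate of the metric gradient)."""
    rng = random.Random(seed)
    phases = [rng.uniform(-3.14, 3.14) for _ in range(n)]
    for _ in range(iters):
        d = [delta if rng.random() < 0.5 else -delta for _ in range(n)]
        j_plus = combined_power([p + x for p, x in zip(phases, d)])
        j_minus = combined_power([p - x for p, x in zip(phases, d)])
        phases = [p + gain * (j_plus - j_minus) * x
                  for p, x in zip(phases, d)]
    return combined_power(phases) / n ** 2   # 1.0 = perfect phasing
```

Only a single scalar metric is measured per dither, which is why SPGD scales to many elements: no per-channel phase sensing is required.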
Scintillation index of a stochastic electromagnetic beam propagating in random media
NASA Astrophysics Data System (ADS)
Korotkova, Olga
2008-05-01
We study the behavior of the scintillation index (the normalized variance of fluctuating intensity) of a wide-sense statistically stationary, quasi-monochromatic, electromagnetic beam propagating in a homogeneous isotropic medium. In particular, we show that when the beam is treated electromagnetically, then, apart from the correlation properties of the medium in which the beam travels, not only its degree of coherence but also its degree of polarization in the source plane can affect the values of the scintillation index along the propagation path. We find that, generally, beams generated by unpolarized sources have a reduced level of scintillation compared with beams generated by fully polarized sources, provided they have the same intensity distribution and the same state of coherence in the source plane. An example illustrating the theory examines how the scintillation index of an electromagnetic Gaussian Schell-model beam evolves on propagation in the turbulent atmosphere. These results may find applications in optical communications through random media and in remote sensing.
Nowacki, Amy S; Zhao, Wenle; Palesch, Yuko Y
2015-01-12
Response-adaptive randomization (RAR) offers clinical investigators benefit by modifying the treatment allocation probabilities to optimize the ethical, operational, or statistical performance of the trial. Delayed primary outcomes and their effect on RAR have been studied in the literature; however, the incorporation of surrogate outcomes has not been fully addressed. We explore the benefits and limitations of surrogate outcome utilization in RAR in the context of acute stroke clinical trials. We propose a novel surrogate-primary (S-P) replacement algorithm where a patient's surrogate outcome is used in the RAR algorithm only until their primary outcome becomes available to replace it. Computer simulations investigate the effect of both the delay in obtaining the primary outcome and the underlying surrogate and primary outcome distributional discrepancies on complete randomization, standard RAR and the S-P replacement algorithm methods. Results show that when the primary outcome is delayed, the S-P replacement algorithm reduces the variability of the treatment allocation probabilities and achieves stabilization sooner. Additionally, the S-P replacement algorithm benefit proved to be robust in that it preserved power and reduced the expected number of failures across a variety of scenarios.
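The S-P replacement idea (use a patient's surrogate outcome in the RAR update only until the primary outcome arrives) can be sketched as follows. Everything concrete here, including the outcome rates, the delay, the prior, and the allocation rule, is an invented illustration rather than the authors' simulation design:

```python
import random

def sp_replacement_trial(n=600, delay=40, seed=4):
    """Two-arm trial sketch: the allocation probability tracks estimated
    success rates; each patient's surrogate outcome feeds the estimate
    immediately and is swapped for the primary outcome once that is
    observed `delay` patients later (the S-P replacement idea)."""
    rng = random.Random(seed)
    p_primary = (0.30, 0.50)           # true primary success rates
    p_surrogate = (0.35, 0.55)         # imperfect surrogate rates
    records = []                       # [arm, surrogate, primary, observed]
    alloc_prob = 0.5                   # probability of assigning arm 1
    for t in range(n):
        arm = 1 if rng.random() < alloc_prob else 0
        s = 1 if rng.random() < p_surrogate[arm] else 0
        y = 1 if rng.random() < p_primary[arm] else 0
        records.append([arm, s, y, False])
        if t >= delay:
            records[t - delay][3] = True     # primary outcome now observed
        succ, tot = [1.0, 1.0], [2.0, 2.0]   # weak prior on each arm
        for a, s_, y_, observed in records:
            succ[a] += y_ if observed else s_
            tot[a] += 1
        r0, r1 = succ[0] / tot[0], succ[1] / tot[1]
        alloc_prob = r1 / (r0 + r1)
    return alloc_prob
```

Because the surrogate contributes information during the delay window and is then replaced, the allocation probability moves toward the better arm earlier than it would if the update waited for primary outcomes alone.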
Nonconvergence of the Wang-Landau algorithms with multiple random walkers.
Belardinelli, R E; Pereyra, V D
2016-05-01
This paper discusses some convergence properties of entropic sampling Monte Carlo methods with multiple random walkers, particularly the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time, t; then, the average over m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system, the 1/t algorithm is more efficient and accurate than the similar version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since doing so does not reduce the error in the calculation. Therefore, the number of walkers alone does not guarantee convergence.
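The 1/√m behaviour of the averaged error (before saturation sets in) is ordinary Monte Carlo averaging. As a simplified analogue of the paper's π calculation, not the Wang-Landau density-of-states machinery itself, the sketch below estimates π with m independent walkers and measures the RMS error of their average:

```python
import math
import random

def walker_pi(n, rng):
    """One walker's Monte Carlo estimate of pi on the unit square."""
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n

def rms_error(m, n=1000, trials=100, seed=9):
    """RMS error of the average over m independent walkers."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(trials):
        mean = sum(walker_pi(n, rng) for _ in range(m)) / m
        sq += (mean - math.pi) ** 2
    return math.sqrt(sq / trials)
```

Averaging 16 independent walkers should cut the RMS error by about a factor of 4; the saturation above m_x reported in the paper arises from the systematic error shared by all walkers, which simple averaging cannot remove.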
Reducing the variability in random-phase initialized Gerchberg-Saxton Algorithm
NASA Astrophysics Data System (ADS)
Salgado-Remacha, Francisco Javier
2016-11-01
The Gerchberg-Saxton algorithm is a common tool for designing computer-generated holograms. There exist standard functions for evaluating the quality of the final results. However, the use of a randomized initial guess leads to different results, increasing the variability of the evaluation-function values. This fact is especially detrimental when the computing time is long. In this work, a new tool is presented that describes the fidelity of the results with notably reduced variability over multiple attempts of the Gerchberg-Saxton algorithm. This new tool proves very helpful for topical fields such as 3D digital holography.
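A minimal random-phase-initialized Gerchberg-Saxton loop makes the variability concrete: each seed converges to a different stagnation point and hence a different final score. The 1-D setting, flat-top target, and uniform source below are our own assumptions:

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=200, seed=0):
    """1-D Gerchberg-Saxton: alternate between the source plane (known
    uniform amplitude) and the Fourier plane (desired target amplitude),
    starting from a random phase guess. Returns (initial, final) RMS
    amplitude error in the Fourier plane."""
    rng = np.random.default_rng(seed)
    n = target_amp.size
    source_amp = np.ones(n) / np.sqrt(n)        # unit-energy illumination
    phase = rng.uniform(0.0, 2.0 * np.pi, n)    # random initial guess

    def rms(p):
        achieved = np.abs(np.fft.fft(source_amp * np.exp(1j * p)))
        return np.sqrt(np.mean((achieved - target_amp) ** 2))

    err0 = rms(phase)
    for _ in range(iters):
        far = np.fft.fft(source_amp * np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))   # impose target
        phase = np.angle(np.fft.ifft(far))              # keep source amp
    return err0, rms(phase)

# Flat-top target whose energy matches the unit-energy source (note the
# unnormalised FFT convention: output energy is n times input energy).
target = np.zeros(64)
target[16:48] = np.sqrt(2.0)
```

Running this with several seeds and comparing the final errors reproduces, in miniature, the run-to-run variability the abstract is concerned with.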
Parallel Implementation of Fast Randomized Algorithms for Low Rank Matrix Decomposition
Lucas, Andrew J.; Stalizer, Mark; Feo, John T.
2014-03-01
We analyze the parallel performance of randomized interpolative decomposition by decomposing low-rank complex-valued Gaussian random matrices larger than 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, with runtime about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.
Resolution for Stochastic Boolean Satisfiability
NASA Astrophysics Data System (ADS)
Teige, Tino; Fränzle, Martin
The stochastic Boolean satisfiability (SSAT) problem was introduced by Papadimitriou in 1985 by adding a probabilistic model of uncertainty to propositional satisfiability through randomized quantification. SSAT has many applications, e.g., in probabilistic planning and, more recently by integrating arithmetic, in probabilistic model checking. In this paper, we first present a new result on the computational complexity of SSAT: SSAT remains PSPACE-complete even for its restriction to 2CNF. Second, we propose a sound and complete resolution calculus for SSAT complementing the classical backtracking search algorithms.
NASA Astrophysics Data System (ADS)
Ross, D. K.; Moreau, William
1995-08-01
We investigate stochastic gravity as a potentially fruitful avenue for studying quantum effects in gravity. Following the approach of stochastic electrodynamics (SED), as a representation of the quantum gravity vacuum we construct a classical state of isotropic random gravitational radiation, expressed as a spin-2 field h_μν(x) composed of plane waves of random phase on a flat spacetime manifold. Requiring Lorentz invariance leads to the result that the spectral composition function of the gravitational radiation, h(ω), must be proportional to 1/ω². The proportionality constant is determined by the Planck condition that the energy density consist of ħω/2 per normal mode, and this condition sets the amplitude scale of the random gravitational radiation at the order of the Planck length, giving a spectral composition function h(ω) = √(16πc²L_p)/ω². As an application of stochastic gravity, we investigate the Davies-Unruh effect. We calculate the two-point correlation function ⟨R_i0j0(O, τ−δτ/2) R_k0l0(O, τ+δτ/2)⟩ of the measurable geodesic deviation tensor field R_i0j0 for two situations: (i) at a point detector uniformly accelerating through the random gravitational radiation, and (ii) at an inertial detector in a heat bath of the random radiation at a finite temperature. We find that the two correlation functions agree to first order in aδτ/c provided that the temperature and acceleration satisfy the relation kT = ħa/2πc.
NASA Astrophysics Data System (ADS)
Tanizawa, Ken; Hirose, Akira
Adaptive polarization mode dispersion (PMD) compensation is required for the speed-up and advancement of present optical communications. The combination of a tunable PMD compensator and its adaptive control method achieves adaptive PMD compensation. In this paper, we report an effective search control algorithm for the feedback control of the PMD compensator. The algorithm is based on the hill-climbing method; however, the step size changes randomly to prevent the convergence from being trapped at a local maximum or in a flat region, unlike in the conventional hill-climbing method. The random step sizes are drawn from Gaussian probability density functions. We conducted transmission simulations at 160 Gb/s, and the results show that the proposed method provides better compensator control than the conventional hill-climbing method.
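The Gaussian-random-step idea can be sketched on a stand-in feedback metric. The multimodal test function and all parameter values below are our own assumptions, not the PMD compensator model from the paper:

```python
import math
import random

def metric(x):
    """Multimodal stand-in for the monitored compensation feedback."""
    return math.sin(3 * x) + 0.6 * math.sin(7 * x) - 0.05 * x * x

def randomized_hill_climb(iters=2000, sigma=1.0, seed=11):
    """Hill climbing with Gaussian-random step sizes: a move is kept
    only if the metric improves, but the step length is redrawn from a
    Gaussian each time, so occasional long jumps can escape a local
    maximum or a flat region where a fixed-step climber would stall."""
    rng = random.Random(seed)
    x = -2.0
    best = metric(x)
    for _ in range(iters):
        trial = x + rng.gauss(0.0, sigma)
        v = metric(trial)
        if v > best:
            x, best = trial, v
    return x, best
```

A fixed small step started at x = -2.0 would climb the nearest local peak and stop; the Gaussian step distribution keeps a non-zero probability of jumping directly into the global peak's basin.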
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
Maire, Sylvain; Simon, Martin
2015-12-15
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
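A bare-bones walk-on-spheres sampler illustrates the interior step of the estimator described above. This sketch assumes a constant diffusion coefficient and pure Dirichlet data on the unit circle; the paper's partially reflecting boundaries and interface transmission rules are not reproduced here.

```python
import math
import random

def wos_sample(x, y, g, eps=1e-3, rng=random):
    """One walk-on-spheres sample of the harmonic function with
    boundary data g on the unit circle: jump uniformly to the largest
    sphere centred at the walker until within eps of the boundary."""
    while True:
        d = 1.0 - math.hypot(x, y)         # distance to the unit circle
        if d < eps:                         # absorbed: evaluate boundary data
            n = math.hypot(x, y)
            return g(x / n, y / n)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(theta)
        y += d * math.sin(theta)

def wos_estimate(x, y, g, n=20000, seed=0):
    """Embarrassingly parallel Monte Carlo average of independent WoS samples."""
    rng = random.Random(seed)
    return sum(wos_sample(x, y, g, rng=rng) for _ in range(n)) / n
```

For g(x, y) = x the harmonic extension is u(x, y) = x, so `wos_estimate(0.3, 0.0, lambda a, b: a)` should land near 0.3 up to Monte Carlo error; each sample is independent, which is what makes the forward map embarrassingly parallel.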
NASA Astrophysics Data System (ADS)
Khokhar, Zahid R.; Ashcroft, Ian A.; Silberschmidt, Vadim V.
2014-02-01
Laminated carbon fibre-reinforced polymer (CFRP) composites are already well established in structural applications where high specific strength and stiffness are required. Damage in these laminates is usually localised and may involve numerous mechanisms, such as matrix cracking, laminate delamination, fibre de-bonding or fibre breakage. Microstructures in CFRPs are non-uniform and irregular, resulting in an element of randomness in the localised damage. This may in turn affect the global properties and failure parameters of components made of CFRPs. This raises the question of whether the inherent stochasticity of localised damage is of significance in terms of the global properties and design methods for such materials. This paper presents a numerical modelling based analysis of the effect of material randomness on delamination damage in CFRP materials by the implementation of a stochastic cohesive-zone model (CZM) within the framework of the finite-element (FE) method. The initiation and propagation of delamination in a unidirectional CFRP double-cantilever beam (DCB) specimen loaded under mode-I was analyzed, accounting for the inherent microstructural stochasticity exhibited by such laminates via the stochastic CZM. Various statistical realizations for a half-scatter of 50 % of fracture energy were performed, with a probability distribution based on Weibull's two-parameter probability density function. The damaged area and the crack lengths in laminates were analyzed, and the results showed higher values of those parameters for random realizations compared to the uniform case for the same levels of applied displacement. This indicates that deterministic analysis of composites using average properties may be non-conservative and a method based on probability may be more appropriate.
Modzelewski, Romain; de la Rue, Thierry; Janvresse, Elise; Hitzel, Anne; Menard, Jean François; Manrique, Alain; Gardin, Isabelle; Gerardin, Emmanuel; Hannequin, Didier; Vera, Pierre
2008-01-01
Heterogeneity analysis has been studied for radiological imaging, but few methods have been developed for functional images. Diffuse heterogeneous perfusion frequently appears in brain single photon emission computed tomography (SPECT) images, but objective quantification is lacking. An automatic method, based on random walk (RW) theory, has been developed to quantify perfusion heterogeneity. We assess the robustness of our algorithm in differentiating levels of diffuse heterogeneity even when focal defects are present. Heterogeneity is quantified by counting R (percentage), the mean rate of visited pixels in a fixed number of steps of the stochastic RW process. The algorithm has been tested on the numerical anthropomorphic Zubal head phantom. Seven diffuse cortical heterogeneity levels were simulated with an adjustable Gaussian function and 6 temporoparietal focal defects simulating Alzheimer disease, leading to 42 phantoms. Data were projected and smoothed (full width at half maximum, 5.5 mm), and Poisson noise was added to the 64 projections. The SPECT data were reconstructed using filtered backprojection (Hamming filter, 0.5 c/p). R values for different levels of perfusion defect and diffuse heterogeneity were evaluated with respect to 3 parameters: the number of slices studied (20 vs 40), the use of Talairach normalization versus original space, and the use of a cortical mask within the Talairach space. For each parameter, regression lines for heterogeneity and temporoparietal defect quantification were analyzed by covariance statistics. R values were also evaluated on SPECT images performed on 25 subjects with suspected focal dementia and on 15 normal controls. Scans were blindly ranked by 2 experienced nuclear physicians according to the degree of diffuse heterogeneity. Variability of R was smaller than 0.17% for repeated measurements. R was influenced more by diffuse heterogeneity than by focal perfusion defects. The Talairach normalization had a
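A simplified version of the visited-pixel index R can be sketched as follows. This toy assumes an intensity-weighted 4-neighbour walk on a 2D image; the paper's exact transition rule on SPECT data is not reproduced, but the sketch shows why a walker trapped in a small bright region yields a lower R than one roaming a homogeneous image.

```python
import random

def visited_rate(img, steps=500, start=(0, 0), seed=0):
    """Heterogeneity index in the spirit of R above: a random walker
    moves to a 4-neighbour with probability proportional to that
    neighbour's intensity; R is the percentage of distinct pixels
    visited in a fixed number of steps."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    i, j = start
    visited = {(i, j)}
    for _ in range(steps):
        nbrs = [(a, b) for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= a < h and 0 <= b < w]
        weights = [img[a][b] for a, b in nbrs]
        if sum(weights) == 0:              # nowhere to go
            break
        (i, j), = rng.choices(nbrs, weights=weights, k=1)
        visited.add((i, j))
    return 100.0 * len(visited) / (h * w)
```

On a uniform image the walker behaves like a simple random walk and covers many pixels; on an image where only a small block has non-zero intensity the walker is confined to that block, so R stays low.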
Quasi-Random Algorithms for Real-Time Spacecraft Motion Planning and Formation Flight
NASA Astrophysics Data System (ADS)
Frazzoli, E.
Many applications of current interest, including on-orbit servicing of large space structures, space-based interferometry, and distributed radar systems, involve several spacecraft maneuvering in close proximity to one another. Often, the mission requires that the spacecraft be able to react quickly to changes in the environment, for example to reconfigure the formation to investigate a target of opportunity, or to prevent damage from a failure. In these cases, the spacecraft must solve complex motion planning problems in real time, minimizing meaningful cost functions, such as time or fuel consumption, and taking into account constraints such as collision and plume impingement avoidance. Such problems are provably hard from a computational point of view, in the sense that any deterministic, complete algorithm will require exponential time to find a feasible solution. Recent advances in the robotics field have provided a new class of algorithms based on randomization, which achieve computational tractability by relaxing the completeness requirement to probabilistic completeness (i.e., the solution will be found by such algorithms with arbitrarily high probability in polynomial time). Randomized algorithms have been developed and successfully applied by the author and other researchers to real-time motion planning problems involving autonomous air vehicles and spacecraft attitude motion. In this paper we present a new class of quasi-random algorithms which, by combining optimal orbital maneuvers and deterministic sampling strategies, are able to provide extremely fast and efficient planners. Moreover, these planners are able to guarantee the safety of the space system, that is, the satisfaction of collision and plume impingement avoidance constraints, even in the face of finite computation times (i.e., when the planner has to be pre-empted). Formation reconfiguration examples will illustrate the effectiveness of the methods, and a discussion of the results will
Elfering, Achim; Burger, Christian; Schade, Volker; Radlinger, Lorenz
2016-01-01
AIM To investigate the acute effects of stochastic resonance whole body vibration (SR-WBV), including muscle relaxation and cardiovascular activation. METHODS Sixty-four healthy students participated. The participants were randomly assigned to sham SR-WBV training at a low intensity (1.5 Hz) or verum SR-WBV training at a higher intensity (5 Hz). Systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR) and self-reported muscle relaxation were assessed before and immediately after SR-WBV. RESULTS Two-factor analyses of variance (ANOVA) showed a significant interaction between pre- vs post-SR-WBV measurements and SR-WBV conditions for muscle relaxation in the neck and back [F(1,55) = 3.35, P = 0.048, η2 = 0.07]. Muscle relaxation in the neck and back increased with verum SR-WBV, but not with sham SR-WBV. No significant changes between pre- and post-training levels of SBP, DBP and HR were observed in either the sham or verum SR-WBV condition. With verum SR-WBV, the improvement in muscle relaxation was greatest in participants who reported experiencing back, neck or shoulder pain more than once a month (P < 0.05). CONCLUSION A single session of SR-WBV increased muscle relaxation in young healthy individuals, while cardiovascular load remained low. An increase in musculoskeletal relaxation in the neck and back is a potential mediator of pain reduction in preventive worksite SR-WBV trials. PMID:27900274
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1982-01-01
An on-line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to 0.05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.
NASA Astrophysics Data System (ADS)
Farhi, Edward; Gosset, David; Hen, Itay; Sandvik, A. W.; Shor, Peter; Young, A. P.; Zamponi, Francesco
2012-11-01
In this paper we study the performance of the quantum adiabatic algorithm on random instances of two combinatorial optimization problems, 3-regular 3-XORSAT and 3-regular max-cut. The cost functions associated with these two clause-based optimization problems are similar as they are both defined on 3-regular hypergraphs. For 3-regular 3-XORSAT the clauses contain three variables and for 3-regular max-cut the clauses contain two variables. The quantum adiabatic algorithms we study for these two problems use interpolating Hamiltonians which are amenable to sign-problem free quantum Monte Carlo and quantum cavity methods. Using these techniques we find that the quantum adiabatic algorithm fails to solve either of these problems efficiently, although for different reasons.
Representation of high frequency Space Shuttle data by ARMA algorithms and random response spectra
NASA Technical Reports Server (NTRS)
Spanos, P. D.; Mushung, L. J.
1990-01-01
High frequency Space Shuttle lift-off data are treated by autoregressive (AR) and autoregressive-moving-average (ARMA) digital algorithms. These algorithms provide useful information on the spectral densities of the data. Further, they yield spectral models which lend themselves to incorporation into the concept of the random response spectrum. This concept yields a reasonably smooth power spectrum for the design of structural and mechanical systems when the available data bank is limited. Due to the non-stationarity of the lift-off event, the pertinent data are split into three slices. Each of the slices is associated with a rather distinguishable phase of the lift-off event, where stationarity can be expected. The results presented are preliminary in nature; they are intended to call attention to the availability of the discussed digital algorithms and to the need to augment the Space Shuttle data bank as more flights are completed.
The backtracking survey propagation algorithm for solving random K-SAT problems
Marino, Raffaele; Parisi, Giorgio; Ricci-Tersenghi, Federico
2016-01-01
Discrete combinatorial optimization has a central role in many scientific disciplines; however, for hard problems we lack linear-time algorithms that would allow us to solve very large instances. Moreover, it is still unclear what key features make a discrete combinatorial optimization problem hard to solve. Here we study random K-satisfiability problems with K=3,4, which are known to be very hard close to the SAT-UNSAT threshold, where problems stop having solutions. We show that the backtracking survey propagation algorithm, in a time practically linear in the problem size, is able to find solutions very close to the threshold, in a region unreachable by any other algorithm. All solutions found have no frozen variables, thus supporting the conjecture that only unfrozen solutions can be found in linear time, and that a problem becomes impossible to solve in linear time when all solutions contain frozen variables. PMID:27694952
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD method avoids direct detection of the co-phase error. This paper analyzed the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduced the basic principle of the SPGD algorithm, and discussed the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error-control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can resolve this trade-off appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
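The SPGD update itself is compact enough to sketch. This is a generic SPGD loop on a toy quadratic metric standing in for image quality; the actual co-phasing metric and optics are not modeled. Each iteration perturbs all control channels in parallel with a random bipolar vector, measures the two-sided metric change, and steps along the perturbation scaled by that change.

```python
import random

def spgd(J, u0, gain=0.3, amp=0.1, iters=800, seed=0):
    """Stochastic parallel gradient descent (ascent on the metric J).

    delta is a bipolar perturbation of amplitude `amp` applied to all
    channels at once; dJ = J(u + delta) - J(u - delta) is a stochastic
    estimate of the directional derivative, and the controls are
    updated along delta scaled by gain * dJ.
    """
    rng = random.Random(seed)
    u = list(u0)
    for _ in range(iters):
        delta = [amp * rng.choice((-1.0, 1.0)) for _ in u]
        j_plus = J([ui + di for ui, di in zip(u, delta)])
        j_minus = J([ui - di for ui, di in zip(u, delta)])
        dJ = j_plus - j_minus
        u = [ui + gain * di * dJ for ui, di in zip(u, delta)]
    return u

# Toy "image quality" metric: peaks when u matches the target phases.
target = [0.5, -0.3, 0.8]
metric = lambda u: -sum((a - b) ** 2 for a, b in zip(u, target))
u_final = spgd(metric, [0.0, 0.0, 0.0])
```

The gain/amplitude trade-off mentioned in the abstract is visible here: larger `gain * amp**2` contracts the error faster per iteration but eventually destabilizes the update.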
Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy
NASA Astrophysics Data System (ADS)
Yang, Yu; Dong, Bin; Wen, Zaiwen
2017-02-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique, widely used in clinical cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: a bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the aperture shapes and the beam intensities alternately. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotone line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
An efficient randomized algorithm for contact-based NMR backbone resonance assignment.
Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal
2006-01-15
Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to be able to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. Our approach is able to eliminate a large majority of noise edges and to
NASA Astrophysics Data System (ADS)
Yan, Bailu; Zhao, Zheng; Zhou, Yingcheng; Yuan, Wenyan; Li, Jian; Wu, Jun; Cheng, Daojian
2017-10-01
Swarm intelligence optimization algorithms are mainstream algorithms for solving complex optimization problems. Among them, the particle swarm optimization (PSO) algorithm has the advantages of fast computation and few parameters. However, PSO is prone to premature convergence. To solve this problem, we develop a new PSO algorithm (RPSOLF) by combining a random learning mechanism with Lévy flight. The RPSOLF algorithm increases the diversity of the population by learning from random particles and by random walks drawn from Lévy flight. On the one hand, we carry out a large number of numerical experiments on benchmark test functions and compare the results with the PSO algorithm with Lévy flight (PSOLF) and other PSO variants from previous reports; the results show that the RPSOLF algorithm finds the optimal solution faster and more efficiently. On the other hand, the RPSOLF algorithm can also be applied to optimize Lennard-Jones clusters, and the results indicate that the algorithm obtains the optimal structures (2-60 atoms) with extraordinarily high efficiency. In summary, the RPSOLF algorithm proposed in this paper proves to be an extremely effective tool for global optimization.
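The two ingredients named above, learning from random peers and Lévy-flight kicks, can be grafted onto a standard PSO loop as below. This is a hedged sketch, not the authors' RPSOLF: the coefficients (inertia 0.7, acceleration 1.5, Lévy scale 0.01) and the Mantegna recipe for the Lévy step are generic choices, and each particle learns from a randomly chosen peer's personal best instead of the global best.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-stable step."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.gauss(0.0, sigma) / abs(rng.gauss(0.0, 1.0)) ** (1 / beta)

def pso_random_levy(f, dim=2, n=20, iters=300, lo=-5.0, hi=5.0, seed=0):
    """PSO variant with random-peer learning and Levy-flight kicks."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    pf = [f(x) for x in X]
    g = min(range(n), key=lambda i: pf[i])
    gx, gf = P[g][:], pf[g]                    # global best
    for _ in range(iters):
        for i in range(n):
            r = rng.randrange(n)               # random learning: a peer particle
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (P[r][d] - X[i][d]))
                X[i][d] += V[i][d] + 0.01 * levy_step(rng)   # Levy kick
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    gf, gx = fx, X[i][:]
    return gx, gf
```

Both mechanisms inject diversity: the peer attractor decorrelates the swarm's pull, and the heavy-tailed kicks occasionally fire long jumps that can escape premature convergence.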
Enhancing network robustness against targeted and random attacks using a memetic algorithm
NASA Astrophysics Data System (ADS)
Tang, Xianglong; Liu, Jing; Zhou, Mingxing
2015-08-01
In the past decades, there has been much interest in the resilience of infrastructures to targeted and random attacks. In recent work by Schneider C. M. et al., Proc. Natl. Acad. Sci. U.S.A., 108 (2011) 3838, the authors proposed an effective measure (namely R; here we label it R_t to denote the measure for targeted attacks) to evaluate network robustness against targeted node attacks. Using a greedy algorithm, they found that the optimal structure is an onion-like one. However, real systems are often under threat of both targeted attacks and random failures, so enhancing network robustness against both targeted and random attacks is of great importance. In this paper, we first design a random-robustness index (R_r). We find that onion-like networks sacrifice the originally strong ability of BA networks to resist random attacks. Moreover, the structure of an R_r-optimized network is found to differ from that of an onion-like network. To design robust scale-free networks (RSF) that are resistant to both targeted and random attacks (TRA) without changing the degree distribution, a memetic algorithm (MA) is proposed, labeled MA-RSF_TRA. In the experiments, both synthetic scale-free networks and real-world networks are used to validate the performance of MA-RSF_TRA. The results show that MA-RSF_TRA has a great ability to search for the most robust network structure that is resistant to both targeted and random attacks.
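The Schneider-style robustness measure is simple to compute: remove nodes one by one in some order and average the fraction of nodes left in the giant component. The sketch below assumes a plain adjacency-dict graph; with a highest-degree removal order it plays the role of R_t, and with a random order it plays the role of the R_r index.

```python
def largest_cc_size(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def robustness(adj, order):
    """R = (1/N) * sum over removals of s(Q), where s(Q) is the fraction
    of nodes in the giant component after the first Q nodes of `order`
    have been removed."""
    n = len(adj)
    removed, total = set(), 0.0
    for u in order:
        removed.add(u)
        total += largest_cc_size(adj, removed) / n
    return total / n

# Example: a 10-node ring; removing nodes 0, 1, ... peels off a path.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
r_seq = robustness(ring, list(range(10)))
```

For the ring removed in sequence the surviving giant component has sizes 9, 8, ..., 1, 0, so R = 45/100 = 0.45 exactly; optimizing this quantity under different removal orders is what the memetic algorithm above automates.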
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
NASA Astrophysics Data System (ADS)
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of price changes in real estate markets. We suppose that in a book of prices the changes occur at the jump points of a Poisson process with random intensity, i.e. the moments of change follow a random process of the Cox type. We calculate the cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both the input and output prices recorded in the book of prices.
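The simplest instance of a Cox process is easy to simulate and already shows the model's signature. In the sketch below, which is an illustration and not the paper's estate-market model, the random intensity is a single Exponential level frozen over the observation window; the resulting counts are overdispersed relative to a plain Poisson process.

```python
import random

def mixed_poisson_count(rng, t, mean_rate):
    """One draw from the simplest Cox process: a random intensity level
    lam ~ Exponential(mean = mean_rate) is frozen over [0, t], and jump
    times then follow a Poisson process with rate lam (generated by
    summing exponential inter-jump gaps)."""
    lam = rng.expovariate(1.0 / mean_rate) or 1e-12   # guard measure-zero lam == 0
    n, s = 0, 0.0
    while True:
        s += rng.expovariate(lam)      # next inter-jump gap
        if s > t:
            return n
        n += 1

rng = random.Random(0)
counts = [mixed_poisson_count(rng, t=1.0, mean_rate=2.0) for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
```

For a plain Poisson process the variance equals the mean; here the random intensity adds a term, Var N = m t + (m t)² for Exponential intensity, so with m = 2 and t = 1 the sample variance should sit near 6 against a mean near 2.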
between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...
Fast approximate stochastic tractography.
Iglesias, Juan Eugenio; Thompson, Paul M; Liu, Cheng-Yi; Tu, Zhuowen
2012-01-01
Many different probabilistic tractography methods have been proposed in the literature to overcome the limitations of classical deterministic tractography: (i) lack of quantitative connectivity information; and (ii) lack of robustness to noise, partial volume effects and selection of seed region. However, these methods rely on Monte Carlo sampling techniques that are computationally very demanding. This study presents an approximate stochastic tractography algorithm (FAST) that can be used interactively, as opposed to having to wait several minutes to obtain the output after marking a seed region. In FAST, tractography is formulated as a Markov chain that relies on a transition tensor. The tensor is designed to mimic the features of a well-known probabilistic tractography method based on a random walk model and Monte Carlo sampling, but can also accommodate other propagation rules. Compared to the baseline algorithm, our method circumvents the sampling process and provides a deterministic solution at the expense of partially sacrificing sub-voxel accuracy. Therefore, the method is strictly speaking not stochastic, but provides a probabilistic output in the spirit of stochastic tractography methods. FAST was compared with the random walk model using real data from 10 patients in two different ways: 1. the probability maps produced by the two methods on five well-known fiber tracts were directly compared using metrics from the image registration literature; and 2. the connectivity measurements between different regions of the brain given by the two methods were compared using the correlation coefficient ρ. The results show that the connectivity measures provided by the two algorithms are well-correlated (ρ = 0.83), and so are the probability maps (normalized cross correlation 0.818 ± 0.081). The maps are also qualitatively (i.e., visually) very similar. The proposed method achieves a 60x speed-up (7 s vs. 7 min) over the Monte Carlo sampling scheme, therefore
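The core trick, replacing Monte Carlo walkers with deterministic propagation of the whole probability mass through a Markov chain, fits in a few lines. The sketch below uses a plain row-stochastic matrix as a stand-in for the paper's transition tensor and ignores voxel geometry entirely.

```python
def propagate(p, T, steps):
    """Deterministic analogue of sampling-based tractography: instead
    of drawing Monte Carlo walker paths, push the entire probability
    distribution p through the row-stochastic transition matrix T."""
    n = len(p)
    for _ in range(steps):
        p = [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]
    return p

# Three "voxels" in a row; mass injected at voxel 0 spreads rightwards.
T = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]            # voxel 2 is absorbing
p2 = propagate([1.0, 0.0, 0.0], T, steps=2)
```

After two steps the map is [0.25, 0.5, 0.25]: exactly the distribution a Monte Carlo walker estimate would converge to, but obtained in one deterministic pass, which is the source of the speed-up described above.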
NASA Astrophysics Data System (ADS)
Yu, Haitao; Wang, Jiang; Du, Jiwei; Deng, Bin; Wei, Xile; Liu, Chen
2013-05-01
The effects of time delay and rewiring probability on stochastic resonance and spatiotemporal order in small-world neuronal networks are studied in this paper. Numerical results show that, irrespective of the pacemaker introduced to one single neuron or all neurons of the network, the phenomenon of stochastic resonance occurs. The time delay in the coupling process can either enhance or destroy stochastic resonance on small-world neuronal networks. In particular, appropriately tuned delays can induce multiple stochastic resonances, which appear intermittently at integer multiples of the oscillation period of the pacemaker. More importantly, it is found that the small-world topology can significantly affect the stochastic resonance on excitable neuronal networks. For small time delays, increasing the rewiring probability can largely enhance the efficiency of pacemaker-driven stochastic resonance. We argue that the time delay and the rewiring probability both play a key role in determining the ability of the small-world neuronal network to improve the noise-induced outreach of the localized subthreshold pacemaker.
What a Difference a Parameter Makes: a Psychophysical Comparison of Random Dot Motion Algorithms
Pilly, Praveen K.; Seitz, Aaron R.
2009-01-01
Random dot motion (RDM) displays have emerged as one of the standard stimulus types employed in psychophysical and physiological studies of motion processing. RDMs are convenient because it is straightforward to manipulate the relative motion energy for a given motion direction in addition to stimulus parameters such as speed, contrast, duration, density and aperture. However, as widely as RDMs are employed, they also vary widely in their details of implementation. As a result, it is often difficult to make direct comparisons across studies employing different RDM algorithms and parameters. Here, we systematically measure the ability of human subjects to estimate motion direction for four commonly used RDM algorithms under a range of parameters in order to understand how these different algorithms compare in their perceptibility. We find that parametric and algorithmic differences can produce dramatically different performances. These effects, while surprising, can be understood in relation to pertinent neurophysiological data regarding the spatiotemporal displacement tuning properties of cells in area MT and how the tuning function changes with stimulus contrast and retinal eccentricity. These data give a baseline by which different RDM algorithms can be compared, demonstrate the need for clear reporting of RDM details in the methods of papers, and also pose new constraints and challenges to models of motion direction processing. PMID:19336240
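One common RDM variant can be sketched to make the implementation choices concrete. This sketch implements the "different" rule, where a fresh random subset of dots carries the signal on every frame; other rules (e.g. reusing the same signal dots across frames) are exactly the kind of algorithmic detail the paper shows to matter perceptually. Names and parameters here are illustrative, not the paper's.

```python
import random

def rdm_frames(n_dots=200, n_frames=10, coherence=0.5, step=(1.0, 0.0),
               size=100.0, seed=0):
    """Generate frames of a random dot motion stimulus ("different"
    rule): on each frame a fresh random subset of dots (proportion =
    coherence) is displaced by `step` with wrap-around, and the
    remaining dots are redrawn at uniform random positions."""
    rng = random.Random(seed)
    dots = [[rng.uniform(0, size), rng.uniform(0, size)] for _ in range(n_dots)]
    frames = [[tuple(d) for d in dots]]
    for _ in range(n_frames - 1):
        signal = set(rng.sample(range(n_dots), int(coherence * n_dots)))
        for i, d in enumerate(dots):
            if i in signal:                        # coherent displacement
                d[0] = (d[0] + step[0]) % size
                d[1] = (d[1] + step[1]) % size
            else:                                  # noise dot: reposition
                d[0], d[1] = rng.uniform(0, size), rng.uniform(0, size)
        frames.append([tuple(d) for d in dots])
    return frames
```

At coherence 1.0 every dot shifts by `step` on every frame; at coherence 0.0 the display is pure noise. How the noise dots are handled (repositioned, random-walked, or reflected) is one of the parameters that differs across published RDM algorithms.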
NASA Astrophysics Data System (ADS)
Ghossein, Elias; Lévesque, Martin
2013-11-01
This paper presents a computationally efficient algorithm for generating random periodic packings of hard ellipsoids. The algorithm is based on molecular dynamics, in which the ellipsoids are set in translational and rotational motion and their volumes gradually increase. Binary collision times are computed by simply finding the roots of a non-linear function. In addition, an original and efficient method to compute the collision time between an ellipsoid and a cube face is proposed. The algorithm can generate all types of ellipsoids (prolate, oblate and scalene) with very high aspect ratios (i.e., >10); it is the first time that such packings are reported in the literature. Orientation tensors were computed for the generated packings, showing that the ellipsoids had a uniform distribution of orientations. Moreover, for low aspect ratios (i.e., ≤10), the volume fraction is the most influential parameter on the algorithm's CPU time; for higher aspect ratios, the aspect ratio becomes as influential as the volume fraction. All necessary pseudo-codes are given so that the reader can easily implement the algorithm.
NASA Astrophysics Data System (ADS)
Browning, Lauren M.; Lee, Kerry J.; Huang, Tao; Nallathamby, Prakash D.; Lowman, Jill E.; Nancy Xu, Xiao-Hong
2009-09-01
We have synthesized and characterized stable (non-aggregating, non-photobleaching and non-blinking), nearly monodisperse and highly-pure Au nanoparticles, and used them to probe nanoparticle transport and diffusion in cleavage-stage zebrafish embryos and to study their effects on embryonic development in real-time. We found that single Au nanoparticles (11.6 ± 0.9 nm in diameter) passively diffused into the chorionic space of the embryos via their chorionic pore canals and continued their random-walk through chorionic space and into the inner mass of embryos. Diffusion coefficients of single nanoparticles vary dramatically (2.8 × 10⁻¹¹ to 1.3 × 10⁻⁸ cm² s⁻¹) as nanoparticles diffuse through the various parts of embryos, suggesting highly diverse transport barriers and viscosity gradients in the embryos. The amount of Au nanoparticles accumulated in embryos increases with nanoparticle concentration. Interestingly, however, their effects on embryonic development are not proportionally related to their concentration. The majority of embryos (74% on average) chronically incubated with 0.025-1.2 nM Au nanoparticles for 120 h developed to normal zebrafish, with some (24%) being dead and few (2%) deformed. We have developed a new approach to image and characterize individual Au nanoparticles embedded in tissues using histology sample preparation methods and localized surface plasmon resonance spectra of single nanoparticles. We found Au nanoparticles in various parts of normally developed and deformed zebrafish, suggesting that the random-walk of nanoparticles in embryos during their development might have led to stochastic effects on embryonic development. These results show that Au nanoparticles are much more biocompatible with (less toxic to) the embryos than the Ag nanoparticles that we reported previously, suggesting that they are better suited as biocompatible probes for imaging embryos in vivo. The results provide powerful evidence that the
Pettersson, Per; Nordström, Jan; Doostan, Alireza
2016-02-01
We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results: we obtain high-order accurate results for the solution variables, and the velocity divergence converges to zero as the mesh is refined.
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
NASA Astrophysics Data System (ADS)
Sun, Xu; Yang, Lina; Gao, Lianru; Zhang, Bing; Li, Shanshan; Li, Jun
2015-01-01
Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, their drawbacks are obvious, including overly simple computing models and underutilized spatial information. In recent years, some studies have tried to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms to propose an ABC-MRF-cluster model that addresses the problems mentioned above. In this model, a typical ABC algorithm framework is adopted in which cluster centers and the iterated conditional modes algorithm's results are treated as feasible solutions and objective functions, respectively, and MRF is modified to be capable of dealing with the clustering problem. Finally, four datasets and two indices are used to show that the ABC-cluster and ABC-MRF-cluster methods obtain better accuracy than conventional methods. Specifically, the ABC-cluster method is superior in terms of spectral discrimination power, whereas the ABC-MRF-cluster method provides better results in terms of the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, ABC-cluster and ABC-MRF-cluster showed good stability.
Geiger, D.; Girosi, F.
1989-05-01
In recent years many researchers have investigated the use of Markov random fields (MRFs) for computer vision. They can be applied, for example, at the output of visual processes to reconstruct surfaces from sparse and noisy depth data, or to integrate early vision processes to label physical discontinuities. Drawbacks of MRF models have been the computational complexity of the implementation and the difficulty of estimating the model parameters. This paper derives deterministic approximations to MRF models. One of the models considered is shown to yield, in a natural way, the graduated non-convexity (GNC) algorithm. This model can be applied to smooth a field while preserving its discontinuities. A new model is then proposed: it allows the gradient of the field to be enhanced at the discontinuities and smoothed elsewhere. All the theoretical results are obtained in the framework of mean field theory, a well-known statistical mechanics technique. A fast, parallel, and iterative algorithm for solving the deterministic equations of the two models is presented, together with experiments on synthetic and real images. The algorithm is applied to the problem of surface reconstruction in the case of sparse data. A fast algorithm is also described that solves the problem of aligning the discontinuities of different visual modules with intensity edges via integration.
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan
2010-01-01
For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
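The initialization-and-restart pattern discussed above can be sketched generically; this is a toy hill climber on an integer line, not the paper's Viterbi-based initialization or Stochastic Greedy Search:

```python
import random

def local_search(score, neighbors, init, max_steps=100, rng=random):
    """Greedy stochastic local search: repeatedly move to a random
    improving neighbor; stop at a local optimum or the step limit."""
    current = init
    for _ in range(max_steps):
        options = [n for n in neighbors(current) if score(n) > score(current)]
        if not options:
            break  # local optimum reached
        current = rng.choice(options)
    return current

def search_with_restarts(score, neighbors, random_init, restarts=20, rng=random):
    """Restart from fresh random initializations and keep the best result."""
    best = None
    for _ in range(restarts):
        candidate = local_search(score, neighbors, random_init(), rng=rng)
        if best is None or score(candidate) > score(best):
            best = candidate
    return best

# Toy multimodal objective over integers 0..99: a quadratic peak at 70
# plus a bonus at multiples of 10, which creates misleading local optima.
score = lambda x: -(x - 70) ** 2 + (30 if x % 10 == 0 else 0)
neighbors = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 99]
random_init = lambda: random.randrange(100)

random.seed(0)
best = search_with_restarts(score, neighbors, random_init)
```

Restarts are what let the search escape the decoy peaks; a single run from a bad initialization gets stuck at the nearest local optimum.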
NASA Astrophysics Data System (ADS)
Rajput, Sudheesh K.; Nishchal, Naveen K.
2017-04-01
We propose a novel security scheme based on the double random phase fractional domain encoding (DRPE) and modified Gerchberg-Saxton (G-S) phase retrieval algorithm for securing two images simultaneously. Any one of the images to be encrypted is converted into a phase-only image using modified G-S algorithm and this function is used as a key for encrypting another image. The original images are retrieved employing the concept of known-plaintext attack and following the DRPE decryption steps with all correct keys. The proposed scheme is also used for encryption of two color images with the help of convolution theorem and phase-truncated fractional Fourier transform. With some modification, the scheme is extended for simultaneous encryption of gray-scale and color images. As a proof-of-concept, simulation results have been presented for securing two gray-scale images, two color images, and simultaneous gray-scale and color images.
NASA Astrophysics Data System (ADS)
Lu, Jianfeng; Yang, Haizhao
2017-07-01
The particle-particle random phase approximation (pp-RPA) has been shown to be capable of describing double, Rydberg, and charge transfer excitations, for which the conventional time-dependent density functional theory (TDDFT) might not be suitable. It is thus desirable to reduce the computational cost of pp-RPA so that it can be efficiently applied to larger molecules and even solids. This paper introduces an O(N³) algorithm, where N is the number of orbitals, based on an interpolative separable density fitting technique and the Jacobi-Davidson eigensolver to calculate a few low-lying excitations in the pp-RPA framework. The size of the pp-RPA matrix can also be reduced by keeping only a small portion of orbitals with orbital energy close to the Fermi energy. This reduced system leads to a smaller prefactor of the cubic scaling algorithm, while keeping the accuracy for the low-lying excitation energies.
NASA Astrophysics Data System (ADS)
Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando
2017-06-01
Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting whole patches, rather than individual pixels/objects, into one set or the other would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
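The patch-level split can be sketched as follows; this is a hypothetical simplification (a real bootstrap would also track the multiplicity of repeated patches):

```python
import random

def bootstrap_by_patch(patch_ids, rng=random):
    """Draw a bootstrap sample at the patch level: whole patches go in-bag
    or out-of-bag together, so spatially correlated pixels from the same
    patch never straddle the split (multiplicities ignored for brevity)."""
    patches = sorted(set(patch_ids))
    in_bag_patches = {rng.choice(patches) for _ in patches}  # with replacement
    in_bag = [i for i, p in enumerate(patch_ids) if p in in_bag_patches]
    oob = [i for i, p in enumerate(patch_ids) if p not in in_bag_patches]
    return in_bag, oob

# Six pixels drawn from three training patches.
patch_ids = ["A", "A", "B", "B", "C", "C"]
random.seed(1)
in_bag, oob = bootstrap_by_patch(patch_ids)
```

Contrast this with the standard per-pixel bootstrap, where two correlated pixels of patch "A" could land on opposite sides of the split and inflate the out-of-bag accuracy.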
Requena-Carrión, Jesús; Requena-Carrión, Víctor J
2016-04-01
In this paper, we develop an analytical approach to studying random patterns of activity in excitable cells. Our analytical approach uses a two-state stochastic model of an excitable system based on the electrophysiological properties of refractoriness and restitution, which characterize cell recovery after excitation. By applying the notion of probability density flux, we derive the distributions of transition times between states and the distribution of interspike interval (ISI) durations for a constant applied stimulus. The derived ISI distribution is unimodal and, provided that the time spent in the excited state is constant, can be approximated by a Rayleigh peak followed by an exponential tail. We then explore the role of the model parameters in determining the shape of the derived distributions and the ISI coefficient of variation. Finally, we use our analytical results to study simulation results from the stochastic Morris-Lecar neuron and from a three-state extension of the proposed stochastic model, which is capable of reproducing multimodal ISI histograms.
NASA Astrophysics Data System (ADS)
Bedard-Hearn, Michael J.; Larsen, Ross E.; Schwartz, Benjamin J.
2005-12-01
The key factors that distinguish algorithms for nonadiabatic mixed quantum/classical (MQC) simulations from each other are how they incorporate quantum decoherence—the fact that classical nuclei must eventually cause a quantum superposition state to collapse into a pure state—and how they model the effects of decoherence on the quantum and classical subsystems. Most algorithms use distinct mechanisms for modeling nonadiabatic transitions between pure quantum basis states ("surface hops") and for calculating the loss of quantum-mechanical phase information (e.g., the decay of the off-diagonal elements of the density matrix). In our view, however, both processes should be unified in a single description of decoherence. In this paper, we start from the density matrix of the total system and use the frozen Gaussian approximation for the nuclear wave function to derive a nuclear-induced decoherence rate for the electronic degrees of freedom. We then use this decoherence rate as the basis for a new nonadiabatic MQC molecular-dynamics (MD) algorithm, which we call mean-field dynamics with stochastic decoherence (MF-SD). MF-SD begins by evolving the quantum subsystem according to the time-dependent Schrödinger equation, leading to mean-field dynamics. MF-SD then uses the nuclear-induced decoherence rate to determine stochastically at each time step whether the system remains in a coherent mixed state or decoheres. Once it is determined that the system should decohere, the quantum subsystem undergoes an instantaneous total wave-function collapse onto one of the adiabatic basis states and the classical velocities are adjusted to conserve energy. Thus, MF-SD combines surface hops and decoherence into a single idea: decoherence in MF-SD does not require the artificial introduction of reference states, auxiliary trajectories, or trajectory swarms, which also makes MF-SD much more computationally efficient than other nonadiabatic MQC MD algorithms. The unified definition of
A novel chaotic block image encryption algorithm based on dynamic random growth technique
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Liu, Lintao; Zhang, Yingqian
2015-03-01
This paper proposes a new block image encryption scheme based on hybrid chaotic maps and dynamic random growth technique. Since the cat map is periodic and can be easily cracked by a chosen-plaintext attack, we use the cat map in another, more secure way, which completely eliminates the cyclical phenomenon and resists chosen-plaintext attack. In the diffusion process, an intermediate parameter is calculated from the image block. The intermediate parameter is used as the initial parameter of the chaotic map to generate a random data stream. In this way, the generated key streams are dependent on the plaintext image, which can resist chosen-plaintext attack. The experimental results show that the proposed encryption algorithm is secure enough to be used in image transmission systems.
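For context, this is the standard Arnold cat map that the scheme builds on, together with a direct check of the periodicity that motivates the authors' modifications (grid sizes here are illustrative):

```python
def cat_map(x, y, n):
    """One iteration of Arnold's cat map on an n-by-n pixel grid."""
    return (x + y) % n, (x + 2 * y) % n

def period(n):
    """Number of iterations after which every pixel returns to its starting
    position, i.e. the periodicity that makes the plain cat map attackable."""
    coords = [(x, y) for x in range(n) for y in range(n)]
    state, steps = coords, 0
    while True:
        state = [cat_map(x, y, n) for x, y in state]
        steps += 1
        if state == coords:
            return steps

p5 = period(5)  # the whole 5x5 grid repeats after only a handful of steps
```

The map is a bijection (its matrix has determinant 1), so it merely permutes pixels, and the short period means repeated scrambling eventually undoes itself, which is the cyclical phenomenon the paper eliminates.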
Random search algorithm for solving the nonlinear Fredholm integral equations of the second kind.
Hong, Zhimin; Yan, Zaizai; Yan, Jiao
2014-01-01
In this paper, a randomized numerical approach is used to obtain approximate solutions for a class of nonlinear Fredholm integral equations of the second kind. The proposed approach contains two steps. First, we define a discretized form of the integral equation by quadrature formula methods; under some conditions on the kernel of the integral equation, the solution of this discretized form converges to the exact solution. We then convert the problem to an optimal control problem by introducing an artificial control function, and the solution of the discretized form is approximated by a kind of Monte Carlo (MC) random search algorithm. Finally, some examples are given to show the efficiency of the proposed approach.
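The two steps can be sketched on a toy linear equation with known solution u(x) = x; the kernel, quadrature rule and search schedule are all invented for illustration:

```python
import math
import random

# Toy Fredholm equation of the second kind: u(x) = f(x) + int_0^1 x*t*u(t) dt,
# with f(x) = (2/3)*x chosen so that the exact solution is u(x) = x.
m = 9
xs = [i / (m - 1) for i in range(m)]
w = [(0.5 if i in (0, m - 1) else 1.0) / (m - 1) for i in range(m)]  # trapezoid

def residual(u):
    """Norm of the defect of the quadrature-discretized equation."""
    r = [u[i] - 2.0 * xs[i] / 3.0
         - sum(w[j] * xs[i] * xs[j] * u[j] for j in range(m))
         for i in range(m)]
    return math.sqrt(sum(v * v for v in r))

# Step 2: Monte Carlo random search over the discretized unknowns, with a
# slowly shrinking step size (schedule invented for the example).
random.seed(0)
u = [0.0] * m
best = residual(u)
step = 0.5
for _ in range(10000):
    trial = [v + random.uniform(-step, step) for v in u]
    r = residual(trial)
    if r < best:
        u, best = trial, r
    step = max(1e-4, step * 0.999)
```

The search only ever accepts residual-decreasing moves, so it is a crude stand-in for the paper's optimal-control formulation, but it shows how the discretization error and the search error combine.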
A rigorous framework for multiscale simulation of stochastic cellular networks
Chevalier, Michael W.; El-Samad, Hana
2009-01-01
Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-cell variability even in clonal populations. Stochastic biochemical networks are modeled as continuous time discrete state Markov processes whose probability density functions evolve according to a chemical master equation (CME). The CME is not solvable but for the simplest cases, and one has to resort to kinetic Monte Carlo techniques to simulate the stochastic trajectories of the biochemical network under study. A commonly used such algorithm is the stochastic simulation algorithm (SSA). Because it tracks every biochemical reaction that occurs in a given system, the SSA presents computational difficulties especially when there is a vast disparity in the timescales of the reactions or in the number of molecules involved in these reactions. This is common in cellular networks, and many approximation algorithms have evolved to alleviate the computational burdens of the SSA. Here, we present a rigorously derived modified CME framework based on the partition of a biochemically reacting system into restricted and unrestricted reactions. Although this modified CME decomposition is as analytically difficult as the original CME, it can be naturally used to generate a hierarchy of approximations at different levels of accuracy. Most importantly, some previously derived algorithms are demonstrated to be limiting cases of our formulation. We apply our methods to biologically relevant test systems to demonstrate their accuracy and efficiency. PMID:19673546
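The SSA mentioned above can be sketched in a few lines; the birth-death network and its rates are invented for illustration:

```python
import random

def ssa(propensities, stoich, x0, t_end, rng=random):
    """Gillespie's stochastic simulation algorithm (SSA): draw the waiting
    time to the next reaction from an exponential distribution with the
    total propensity, then choose which reaction fired in proportion to
    its individual propensity."""
    t, x = 0.0, list(x0)
    path = [(0.0, tuple(x))]
    while t < t_end:
        a = [p(x) for p in propensities]
        total = sum(a)
        if total == 0:
            break  # no reaction can fire any more
        t += rng.expovariate(total)
        pick, acc = rng.uniform(0, total), 0.0
        for j, aj in enumerate(a):
            acc += aj
            if pick <= acc:
                x = [xi + s for xi, s in zip(x, stoich[j])]
                break
        path.append((t, tuple(x)))
    return path

# Toy birth-death network: birth at constant rate 10, death at rate 1*X
# (rates chosen arbitrarily; the stationary mean is 10 molecules).
propensities = [lambda x: 10.0, lambda x: 1.0 * x[0]]
stoich = [(+1,), (-1,)]
random.seed(42)
path = ssa(propensities, stoich, (0,), t_end=50.0)
```

Because every single reaction event is simulated, the cost grows with the total propensity, which is exactly the burden that the multiscale approximations discussed above are designed to relieve.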
Subspace dynamic mode decomposition for stochastic Koopman analysis
NASA Astrophysics Data System (ADS)
Takeishi, Naoya; Kawahara, Yoshinobu; Yairi, Takehisa
2017-09-01
The analysis of nonlinear dynamical systems based on the Koopman operator is attracting attention in various applications. Dynamic mode decomposition (DMD) is a data-driven algorithm for Koopman spectral analysis, and several variants with a wide range of applications have been proposed. However, popular implementations of DMD suffer from observation noise on random dynamical systems and generate inaccurate estimation of the spectra of the stochastic Koopman operator. In this paper, we propose subspace DMD as an algorithm for the Koopman analysis of random dynamical systems with observation noise. Subspace DMD first computes the orthogonal projection of future snapshots to the space of past snapshots and then estimates the spectra of a linear model, and its output converges to the spectra of the stochastic Koopman operator under standard assumptions. We investigate the empirical performance of subspace DMD with several dynamical systems and show its utility for the Koopman analysis of random dynamical systems.
Evolving random fractal Cantor superlattices for the infrared using a genetic algorithm.
Bossard, Jeremy A; Lin, Lan; Werner, Douglas H
2016-01-01
Ordered and chaotic superlattices have been identified in Nature that give rise to a variety of colours reflected by the skin of various organisms. In particular, organisms such as silvery fish possess superlattices that reflect a broad range of light from the visible to the UV. Such superlattices have previously been identified as 'chaotic', but we propose that apparent 'chaotic' natural structures, which have been previously modelled as completely random structures, should have an underlying fractal geometry. Fractal geometry, often described as the geometry of Nature, can be used to mimic structures found in Nature, but deterministic fractals produce structures that are too 'perfect' to appear natural. Introducing variability into fractals produces structures that appear more natural. We suggest that the 'chaotic' (purely random) superlattices identified in Nature are more accurately modelled by multi-generator fractals. Furthermore, we introduce fractal random Cantor bars as a candidate for generating both ordered and 'chaotic' superlattices, such as the ones found in silvery fish. A genetic algorithm is used to evolve optimal fractal random Cantor bars with multiple generators targeting several desired optical functions in the mid-infrared and the near-infrared. We present optimized superlattices demonstrating broadband reflection as well as single and multiple pass bands in the near-infrared regime. © 2016 The Author(s).
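The construction of a multi-generator random Cantor bar can be sketched as follows; the generators themselves are hypothetical stand-ins for the evolved ones:

```python
import random

# Three hypothetical length-5 generators: 1 = dielectric layer, 0 = gap.
GENERATORS = [(1, 0, 1, 0, 1), (1, 1, 0, 1, 1), (1, 0, 0, 0, 1)]

def random_cantor_bar(stages, rng=random):
    """Grow a random fractal Cantor bar: at every stage each solid segment
    is replaced by a randomly chosen generator, while gaps subdivide into
    gaps. Choosing among several generators gives the 'chaotic' look
    without abandoning the underlying fractal geometry."""
    bar = [1]
    for _ in range(stages):
        nxt = []
        for seg in bar:
            nxt.extend(rng.choice(GENERATORS) if seg else (0,) * 5)
        bar = nxt
    return bar

random.seed(7)
layers = random_cantor_bar(stages=3)
```

In an optimization loop, a genetic algorithm would evolve the generator patterns and the stage count against the desired reflection spectrum, with each bar rendered as a physical layer stack.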
Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus
NASA Astrophysics Data System (ADS)
Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.
2014-09-01
There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is
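The core RANSAC loop for a constant-velocity model can be sketched as follows; the threshold, iteration count and synthetic scene are invented for illustration, and the stellar-motion rejection stage is omitted:

```python
import random

def ransac_line(points, n_iters=200, tol=1.5, rng=random):
    """RANSAC for a constant-velocity track x(t) = x0 + v*t: repeatedly fit
    a line through two randomly sampled detections and keep the model that
    explains the most detections within the tolerance (the inliers)."""
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (t1, x1), (t2, x2) = rng.sample(points, 2)
        if t1 == t2:
            continue  # degenerate pair, cannot define a velocity
        v = (x2 - x1) / (t2 - t1)
        x0 = x1 - v * t1
        inliers = [(t, x) for t, x in points if abs(x0 + v * t - x) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (x0, v), inliers
    return best_model, best_inliers

# Synthetic frames: a drifter moving at 2 px/frame plus random clutter blobs.
random.seed(3)
track = [(t, 5.0 + 2.0 * t) for t in range(10)]
clutter = [(random.randrange(10), random.uniform(0.0, 100.0)) for _ in range(15)]
model, inliers = ransac_line(track + clutter)
```

Even with more clutter detections than track detections, the consensus step recovers the constant-velocity model because random clutter rarely lines up.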
Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm
NASA Astrophysics Data System (ADS)
Kaczałek, B.; Borkowski, A.
2016-06-01
The objective of this research is to detect points that describe a road surface in an unclassified point cloud of airborne laser scanning (ALS). For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider ALS points representing only the last echo. For these points, RGB values, intensity, the normal vectors, their mean values and the standard deviations are provided. Moreover, local and global height variations are taken into account as components of a feature vector. The feature vectors are calculated on the basis of the 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m2 that represent a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we obtained two subsets of ALS points, one of which represents points belonging to the road network. After the classification evaluation we achieved an overall classification accuracy of about 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.
Fault diagnosis in spur gears based on genetic algorithm and random forest
NASA Astrophysics Data System (ADS)
Cerrada, Mariela; Zurita, Grover; Cabrera, Diego; Sánchez, René-Vinicio; Artés, Mariano; Li, Chuan
2016-03-01
There are growing demands for condition-based monitoring of gearboxes, and therefore new methods to improve the reliability, effectiveness and accuracy of gear fault detection ought to be evaluated. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance of the diagnostic models. On the other hand, random forest classifiers are suitable models in industrial environments where large data samples are not usually available for training such diagnostic models. The main aim of this research is to build a robust system for multi-class fault diagnosis in spur gears by selecting the best set of condition parameters in the time, frequency and time-frequency domains, extracted from vibration signals. The diagnostic system is built using genetic algorithms and a random forest-based classifier in a supervised environment. The original set of condition parameters is reduced by around 66% of its initial size using genetic algorithms, while still achieving an acceptable classification precision of over 97%. The approach is tested on real vibration signals considering several fault classes, one of them an incipient fault, under different running conditions of load and velocity.
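The GA-plus-classifier wrapper can be sketched generically; the fitness function below is a stand-in for the cross-validated random-forest precision the authors would use:

```python
import random

def ga_feature_selection(fitness, n_features, pop_size=30, generations=40,
                         p_mut=0.05, rng=random):
    """Sketch of a GA wrapper for feature selection: individuals are bitmasks
    over the features; tournament selection, one-point crossover and bit-flip
    mutation evolve the population toward high-fitness feature subsets."""
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)    # tournament selection
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_features)          # one-point crossover
            child = [bit if rng.random() >= p_mut else 1 - bit
                     for bit in a[:cut] + b[cut:]]      # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: features 0-4 are informative, every extra feature costs
# a little; a real system would plug in classifier accuracy here instead.
def fitness(mask):
    return sum(mask[:5]) - 0.2 * sum(mask[5:])

random.seed(0)
best = ga_feature_selection(fitness, n_features=20)
```

The size penalty plays the role of the roughly 66% reduction reported above: the GA is rewarded for dropping parameters that do not pay for themselves in accuracy.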
Improved random-starting method for the EM algorithm for finite mixtures of regressions.
Schepers, Jan
2015-03-01
Two methods for generating random starting values for the expectation maximization (EM) algorithm are compared in terms of yielding maximum likelihood parameter estimates in finite mixtures of regressions. One of these methods is ubiquitous in applications of finite mixture regression, whereas the other method is an alternative that appears not to have been used so far. The two methods are compared in two simulation studies and on an illustrative data set. The results show that the alternative method yields solutions with likelihood values at least as high as, and often higher than, those returned by the standard method. Moreover, analyses of the illustrative data set show that the results obtained by the two methods may differ considerably with regard to some of the substantive conclusions. The results reported in this article indicate that in applications of finite mixture regression, consideration should be given to the type of mechanism chosen to generate random starting values for the EM algorithm. In order to facilitate the use of the proposed alternative method, an R function implementing the approach is provided in the Appendix of the article.
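The mechanism of comparing random starting values by final likelihood can be sketched on a simpler model, a two-component Gaussian mixture rather than a mixture of regressions; all settings here are illustrative:

```python
import math
import random

def em(data, mu1, mu2, iters=50):
    """EM for a two-component Gaussian mixture with unit variances and equal
    weights: only the component means are estimated."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        r = [1.0 / (1.0 + math.exp(((x - mu1) ** 2 - (x - mu2) ** 2) / 2.0))
             for x in data]
        # M-step: responsibility-weighted means (guard against empty groups).
        s1 = sum(r)
        s2 = len(data) - s1
        if s1 > 1e-12:
            mu1 = sum(ri * x for ri, x in zip(r, data)) / s1
        if s2 > 1e-12:
            mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / s2
    # Log-likelihood up to an additive constant (enough for comparing fits).
    ll = sum(math.log(math.exp(-(x - mu1) ** 2 / 2) +
                      math.exp(-(x - mu2) ** 2 / 2)) for x in data)
    return mu1, mu2, ll

def best_of_random_starts(data, n_starts, rng=random):
    """The random-start mechanism: run EM from several random initial means
    and keep the fit with the highest log-likelihood."""
    lo, hi = min(data), max(data)
    fits = [em(data, rng.uniform(lo, hi), rng.uniform(lo, hi))
            for _ in range(n_starts)]
    return max(fits, key=lambda fit: fit[2])

random.seed(2)
data = ([random.gauss(0, 1) for _ in range(60)] +
        [random.gauss(6, 1) for _ in range(60)])
mu1, mu2, ll = best_of_random_starts(data, n_starts=10)
```

Single starts can converge to poor local maxima of the likelihood; running several and keeping the highest-likelihood fit is exactly the mechanism whose starting-value distribution the article compares.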
NASA Astrophysics Data System (ADS)
Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.
2013-05-01
In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent, including greater efficiency and reduced exploration time. This paper addresses the challenge of identifying a target in a given environment using a team of mobile agents by proposing a novel method for mapping and moving agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has recently developed many applications that would benefit from the approach presented in this work, including search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm in combination with hexagonal partitioning is simulated and analyzed, and the advantages of this approach are presented and discussed.
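The equidistance property that motivates the beehive partition is easy to see in axial hexagon coordinates; a small sketch (the coordinate convention is an assumption, unrelated to the authors' implementation):

```python
# Axial coordinates for a hexagonal ("beehive") grid. Every cell has six
# neighbors, and each move to a neighbor covers the same distance -- the
# equidistance property that motivates the hexagonal partition.
HEX_DIRS = ((1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1))

def hex_neighbors(q, r):
    """The six cells an agent can move to from axial cell (q, r)."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def hex_distance(a, b):
    """Minimum number of cell-to-cell moves between two hexes (cube metric)."""
    (aq, ar), (bq, br) = a, b
    return (abs(aq - bq) + abs(ar - br) + abs((aq + ar) - (bq + br))) // 2

ring = hex_neighbors(0, 0)    # the six equidistant moves from the origin
```

On a square grid a diagonal move is longer than an axis-aligned one; on the hexagonal grid all six moves cost exactly one step, which simplifies both path planning and coverage accounting.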
Identifying and Analyzing Novel Epilepsy-Related Genes Using Random Walk with Restart Algorithm
Guo, Wei; Shang, Dong-Mei; Cao, Jing-Hui; Feng, Kaiyan; Wang, ShaoPeng
2017-01-01
As a pathological condition, epilepsy is caused by abnormal neuronal discharges in the brain that temporarily disrupt cerebral function. Epilepsy is a chronic disease that occurs at all ages and seriously affects patients' personal lives. Thus, there is a strong need for effective medicines or instruments to treat the disease. Identifying epilepsy-related genes is essential for understanding and treating the disease, because the proteins encoded by these genes are candidate drug targets. In this study, a pioneering computational workflow was proposed to predict novel epilepsy-related genes using the random walk with restart (RWR) algorithm. As reported in the literature, the RWR algorithm often produces a number of false positive genes, so in this study a permutation test and functional association tests were implemented to filter the genes identified by the RWR algorithm, greatly reducing the number of suspect genes and yielding only thirty-three novel epilepsy genes. Finally, these novel genes were analyzed against recently published literature. Our findings indicate that all of the novel genes are closely related to epilepsy. We believe the proposed workflow can also be applied to identify genes related to other diseases and to deepen our understanding of their mechanisms. PMID:28255556
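The core RWR iteration is p ← (1 − r)·W·p + r·p0, where W is the column-normalized network adjacency matrix, p0 is concentrated on the seed (known disease) genes, and r is the restart probability. A minimal sketch on a toy graph (the network, seed set, and restart probability are illustrative, not the study's data):

```python
def random_walk_with_restart(adj, seeds, r=0.8, tol=1e-12, max_iter=10000):
    """Iterate p <- (1 - r) * W p + r * p0, where W is the column-normalized
    adjacency matrix and p0 is uniform over the seed (known disease) genes."""
    n = len(adj)
    deg = [sum(adj[i][j] for i in range(n)) for j in range(n)]   # column sums
    p0 = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    p = p0[:]
    for _ in range(max_iter):
        nxt = [(1 - r) * sum(adj[i][j] * p[j] / deg[j]
                             for j in range(n) if deg[j]) + r * p0[i]
               for i in range(n)]
        if max(abs(u - v) for u, v in zip(nxt, p)) < tol:
            break
        p = nxt
    return nxt

# Toy interaction network: gene 0 is the seed; unlabeled genes are ranked by
# their steady-state visiting probability (higher = more strongly associated).
A = [[0, 1, 1, 0, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]]
scores = random_walk_with_restart(A, seeds={0})
```

Genes close to the seeds in the network accumulate probability mass; the study's permutation and functional-association filters would then be applied on top of this ranking.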
Precise algorithm to generate random sequential addition of hard hyperspheres at saturation.
Zhang, G; Torquato, S
2013-11-01
The study of the packing of hard hyperspheres in d-dimensional Euclidean space R^{d} has been a topic of great interest in statistical mechanics and condensed matter theory. While the densest known packings are ordered in sufficiently low dimensions, it has been suggested that in sufficiently large dimensions, the densest packings might be disordered. The random sequential addition (RSA) time-dependent packing process, in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap, is a useful packing model to study disorder in high dimensions. Of particular interest is the infinite-time saturation limit in which the available space for another sphere tends to zero. However, the associated saturation density has been determined in all previous investigations by extrapolating the density results for nearly saturated configurations to the saturation limit, which necessarily introduces numerical uncertainties. We have refined an algorithm devised by us [S. Torquato, O. U. Uche, and F. H. Stillinger, Phys. Rev. E 74, 061308 (2006)] to generate RSA packings of identical hyperspheres. The improved algorithm produces packings that are guaranteed to contain no available space in a large simulation box using finite computational time with heretofore unattained precision and across the widest range of dimensions (2≤d≤8). We have also calculated the packing and covering densities, pair correlation function g(2)(r), and structure factor S(k) of the saturated RSA configurations. As the space dimension increases, we find that pair correlations markedly diminish, consistent with a recently proposed "decorrelation" principle, and the degree of "hyperuniformity" (suppression of infinite-wavelength density fluctuations) increases. We have also calculated the void exclusion probability in order to compute the so-called quantizer error of the RSA packings, which is related to the second moment of inertia of the average
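The RSA process itself is simple to state; a minimal d = 2 sketch with a fixed trial budget follows. Note the hedge in the docstring: this plain version only approaches saturation, whereas the paper's refined algorithm tracks the remaining available space so it can terminate at guaranteed saturation.

```python
import math, random

def rsa_disks(diameter=0.1, attempts=20000, seed=1):
    """Plain random sequential addition of equal hard disks in the unit square
    with periodic boundaries. A fixed trial budget only approaches saturation;
    the paper's algorithm additionally tracks the remaining available space so
    it can run to guaranteed saturation."""
    rng = random.Random(seed)
    d2 = diameter * diameter
    centers = []
    for _ in range(attempts):
        x, y = rng.random(), rng.random()
        ok = True
        for cx, cy in centers:
            ddx = abs(x - cx); ddx = min(ddx, 1.0 - ddx)  # nearest periodic image
            ddy = abs(y - cy); ddy = min(ddy, 1.0 - ddy)
            if ddx * ddx + ddy * ddy < d2:
                ok = False
                break
        if ok:
            centers.append((x, y))
    return centers

centers = rsa_disks()
# Packing fraction: disk area times the number of accepted disks (unit box).
density = len(centers) * math.pi * (0.1 / 2) ** 2
```

The resulting density stays strictly below the known 2-D RSA saturation value (about 0.547) and creeps toward it as the trial budget grows, which is exactly the extrapolation problem the paper's saturation-guaranteed algorithm eliminates.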
An efficient voting algorithm for finding additive biclusters with random background.
Xiao, Jing; Wang, Lusheng; Liu, Xiaowen; Jiang, Tao
2008-12-01
The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and of columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n × m background matrix is a random integer from [0, L - 1] for some integer L, and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L - 1] with probability θ. We propose an O(n²m) time algorithm based on voting to solve the problem. We show that when k ≥ Ω(√(n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 - 9/n². We also implement our algorithm as a C++ program named VOTE. The implementation incorporates several ideas for estimating the size of an implanted bicluster, adjusting the threshold in voting, dealing with small biclusters, and dealing with overlapping implanted biclusters. Our experimental results on both simulated and real datasets show that VOTE can find biclusters with a high accuracy and speed.
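The voting idea rests on one observation: two rows of an additive bicluster differ by a constant on the bicluster columns, so those columns all "vote" for the same difference value. A toy illustration (the matrix sizes, offsets, pivot choice, and threshold are assumptions for demonstration; the paper's VOTE program is considerably more elaborate):

```python
import random
from collections import Counter

def vote_rows(matrix, pivot, threshold):
    """If row i and the pivot row both belong to an additive bicluster, they
    differ by one constant on every bicluster column, so all those columns cast
    a vote for the same difference value; rows whose top vote count reaches the
    threshold are selected."""
    selected = []
    for i, row in enumerate(matrix):
        diffs = [a - b for a, b in zip(row, matrix[pivot])]
        top_votes = Counter(diffs).most_common(1)[0][1]
        if top_votes >= threshold:
            selected.append(i)
    return selected

# Random background matrix over [0, L-1] with an additive bicluster implanted
# in rows 0-2, columns 0-3 (sizes, offsets, and the pivot are illustrative).
L = 10
rng = random.Random(3)
M = [[rng.randrange(L) for _ in range(8)] for _ in range(8)]
for r, offset in zip(range(3), (0, 4, 6)):
    for c in range(4):
        M[r][c] = c + offset
found = vote_rows(M, pivot=0, threshold=4)
```

In the random background, matching differences on many columns are unlikely, which is the probabilistic argument behind the paper's 1 - 9/n² success guarantee for sufficiently large biclusters.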
Zhang, Kui; Busov, Victor; Wei, Hairong
2017-01-01
Background: Present knowledge indicates that a multilayered hierarchical gene regulatory network (ML-hGRN) often operates above a biological pathway. Although the ML-hGRN is very important for understanding how a pathway is regulated, there is almost no computational algorithm for directly constructing ML-hGRNs. Results: A backward elimination random forest (BWERF) algorithm was developed for constructing the ML-hGRN operating above a biological pathway. For each pathway gene, BWERF used a random forest model to recursively calculate the importance values of all transcription factors (TFs) to this pathway gene, excluding a portion (e.g., 1/10) of the least important TFs in each round of modeling; during this process, the importance values of all TFs to the pathway gene were updated and ranked until only one TF remained in the list. Afterwards, the importance values of a TF to all pathway genes were aggregated and fitted to a Gaussian mixture model to determine TF retention for the regulatory layer immediately above the pathway layer. The TFs acquired at this second layer were then set as the new bottom layer to infer the next upper layer, and this process was repeated until a ML-hGRN with the expected number of layers was obtained. Conclusions: BWERF improved the accuracy of constructing ML-hGRNs because it used backward elimination to exclude noise genes and aggregated the individual importance values when determining TF retention. We validated BWERF by using it to construct ML-hGRNs operating above the mouse pluripotency maintenance pathway and the Arabidopsis lignocellulosic pathway. Compared to GENIE3, BWERF showed an improvement in recognizing authentic TFs regulating a pathway. Compared to the bottom-up Gaussian graphical model algorithm we developed for constructing ML-hGRNs, BWERF can construct ML-hGRNs with significantly fewer edges, enabling biologists to choose the implicit edges for experimental
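The backward-elimination loop for a single pathway gene can be sketched as follows. Caveats: a simple absolute-correlation score stands in for the paper's random-forest variable importance, the Gaussian-mixture retention step is omitted, and all data are synthetic:

```python
import random

def importance_scores(columns, target):
    """Stand-in importance measure: absolute Pearson correlation with the
    target. (BWERF uses random-forest variable importance here instead.)"""
    n = len(target)
    my = sum(target) / n
    scores = []
    for col in columns:
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, target))
        vx = sum((a - mx) ** 2 for a in col) ** 0.5
        vy = sum((b - my) ** 2 for b in target) ** 0.5
        scores.append(abs(cov / (vx * vy + 1e-12)))
    return scores

def backward_elimination(tf_data, target, drop_frac=0.1):
    """BWERF-style loop for one pathway gene: rank the candidate TFs by
    importance, drop the least important fraction each round, and keep updating
    every TF's recorded importance until a single TF remains."""
    remaining = list(tf_data)
    final_importance = {}
    while len(remaining) > 1:
        scores = importance_scores([tf_data[t] for t in remaining], target)
        ranked = sorted(zip(scores, remaining))          # ascending importance
        for s, t in ranked:
            final_importance[t] = s                      # updated each round
        n_drop = max(1, int(len(remaining) * drop_frac))
        remaining = [t for _, t in ranked[n_drop:]]
    return remaining[0], final_importance

# Synthetic expression data: the pathway gene is driven by TF "A" only.
data_rng = random.Random(0)
n_samples = 60
tf_data = {name: [data_rng.gauss(0, 1) for _ in range(n_samples)]
           for name in "ABCDE"}
target = [2 * a + data_rng.gauss(0, 0.1) for a in tf_data["A"]]
top_tf, importance = backward_elimination(tf_data, target)
```

In the full method, these per-gene importance values are aggregated across all pathway genes before the mixture-model retention decision.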
NASA Astrophysics Data System (ADS)
Cocco, S.; Monasson, R.
2001-08-01
The computational complexity of solving random 3-Satisfiability (3-SAT) problems is investigated using statistical physics concepts and techniques related to phase transitions, growth processes and (real-space) renormalization flows. 3-SAT is a representative example of hard computational tasks; it consists in determining whether a set of αN randomly drawn logical constraints involving N Boolean variables can be satisfied altogether or not. Widely used solving procedures, such as the Davis–Putnam–Logemann–Loveland (DPLL) algorithm, perform a systematic search for a solution through a sequence of trials and errors represented by a search tree. The size of the search tree accounts for the computational complexity, i.e. the amount of computational effort required to achieve resolution. In the present study, we identify, using theory and numerical experiments, easy (size of the search tree scaling polynomially with N) and hard (exponential scaling) regimes as a function of the ratio α of constraints per variable. The typical complexity is explicitly calculated in the different regimes, in very good agreement with numerical simulations. Our theoretical approach is based on the analysis of the growth of the branches in the search tree under the operation of DPLL. On each branch, the initial 3-SAT problem is dynamically turned into a more generic 2+p-SAT problem, where p and 1 - p are the fractions of constraints involving three and two variables respectively. The growth of each branch is monitored by the dynamical evolution of α and p and is represented by a trajectory in the static phase diagram of the random 2+p-SAT problem. Depending on whether or not the trajectories cross the boundary between satisfiable and unsatisfiable phases, single branches or full trees are generated by DPLL, resulting in easy or hard resolutions. Our picture of the origin of complexity can be applied to other computational problems solved by branch and bound algorithms.
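The trial-and-error search the analysis models can be sketched as a minimal DPLL with unit propagation (the clause representation, lists of signed integers, is an assumed convention; the instance parameters are illustrative):

```python
import random

def dpll(clauses, assignment=None):
    """Minimal DPLL: unit propagation plus branching ("trials and errors").
    Clauses are lists of nonzero ints; -v means NOT v. Returns a satisfying
    assignment (dict) or None if the formula is unsatisfiable."""
    assignment = dict(assignment or {})
    while True:                                   # unit-propagation fixed point
        new_clauses, unit = [], None
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return None                       # empty clause: backtrack
            if len(lits) == 1 and unit is None:
                unit = lits[0]
            new_clauses.append(lits)
        clauses = new_clauses
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
    if not clauses:
        return assignment                         # every clause satisfied
    var = abs(clauses[0][0])                      # branch on a variable
    for value in (True, False):
        trial = dict(assignment)
        trial[var] = value
        result = dpll(clauses, trial)
        if result is not None:
            return result
    return None

# A random 3-SAT instance at ratio alpha = M/N = 3.0 (in the easy phase).
rng = random.Random(7)
N = 20
formula = [[rng.choice((-1, 1)) * v for v in rng.sample(range(1, N + 1), 3)]
           for _ in range(int(3.0 * N))]
model = dpll(formula)    # a satisfying assignment, or None if unsatisfiable
```

As clauses shrink under propagation, the remaining formula contains a mix of 2- and 3-clauses: exactly the dynamically generated 2+p-SAT problem whose trajectory through the phase diagram the paper analyzes.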
Marmolino, Ciro
2011-10-15
The paper describes the occurrence of stochastic heating of dust particles in dusty plasmas as an energy instability due to the correlations between dust grain charge and electric field fluctuations. The possibility that the mean energy ("temperature") of dust particles can grow in time has been found both from the self-consistent kinetic description of dusty plasmas taking into account charge fluctuations [U. de Angelis, A. V. Ivlev, V. N. Tsytovich, and G. E. Morfill, Phys. Plasmas 12(5), 052301 (2005)] and from a Fokker-Planck approach to systems with variable charge [A. V. Ivlev, S. K. Zhdanov, B. A. Klumov, and G. E. Morfill, Phys. Plasmas 12(9), 092104 (2005)]. Here, a different derivation is given by using the mathematical techniques of so-called multiplicative stochastic differential equations. Both the "fast" and "slow" fluctuation cases are discussed.
Combined fuzzy logic and random walker algorithm for PET image tumor delineation.
Soufi, Motahare; Kamali-Asl, Alireza; Geramifar, Parham; Abdoli, Mehrsima; Rahmim, Arman
2016-02-01
The random walk (RW) technique serves as a powerful tool for PET tumor delineation, which typically involves significant noise and/or blurring. One challenging step is hard decision-making in pixel labeling. Fuzzy logic techniques have achieved increasing application in edge detection. We aimed to combine the advantages of fuzzy edge detection with the RW technique to improve PET tumor delineation. A fuzzy inference system was designed for tumor edge detection from RW probabilities. Three clinical PET/computed tomography datasets containing 12 liver, 13 lung, and 18 abdomen tumors were analyzed, with manual expert tumor contouring as ground truth. The standard RW and proposed combined method were compared quantitatively using the dice similarity coefficient, the Hausdorff distance, and the mean standard uptake value. The dice similarity coefficient of the proposed method versus standard RW showed significant mean improvements of 21.0±7.2, 12.3±5.8, and 18.4%±6.1% for liver, lung, and abdominal tumors, respectively, whereas the mean improvements in the Hausdorff distance were 3.6±1.4, 1.3±0.4, 1.8±0.8 mm, and the mean improvements in SUVmean error were 15.5±6.3, 11.7±8.6, and 14.1±6.8% (all P's<0.001). For all tumor sizes, the proposed method outperformed the RW algorithm. Furthermore, tumor edge analysis demonstrated further enhancement of the performance of the algorithm, relative to the RW method, with decreasing edge gradients. The proposed technique improves PET lesion delineation at different tumor sites. It depicts greater effectiveness in tumors with smaller size and/or low edge gradients, wherein most PET segmentation algorithms encounter serious challenges. Favorable execution time and accurate performance of the algorithm make it a great tool for clinical applications.
Baik, Jonathan; Ye, Qian; Zhang, Lewei; Poh, Catherine; Rosin, Miriam; MacAulay, Calum; Guillaud, Martial
2014-06-01
A major challenge for the early diagnosis of oral cancer is the ability to differentiate oral premalignant lesions (OPL) at high risk of progressing into invasive squamous cell carcinoma (SCC) from those at low risk. Our group has previously used high-resolution image analysis algorithms to quantify the nuclear phenotypic changes occurring in OPLs. This approach, however, requires a manual selection of nuclei images. Here, we investigated a new, semi-automated algorithm to identify OPLs at high risk of progressing into invasive SCC from those at low risk using Random Forests, a tree-based ensemble classifier. We trained a sequence of classifiers using morphometric data calculated on nuclei from 29 normal, 5 carcinoma in situ (CIS) and 28 SCC specimens. After automated discrimination of nuclei from other objects (i.e., debris, clusters, etc.), a nuclei classifier was trained to discriminate abnormal nuclei (8,841) from normal nuclei (5,762). We extracted voting scores from this trained classifier and created an automated nuclear phenotypic score (aNPS) to identify OPLs at high risk of progression. The new algorithm showed a correct classification rate of 80% (80.6% sensitivity, 79.3% specificity) at the cellular level for the test set, and a correct classification rate of 75% (77.8% sensitivity, 71.4% specificity) at the tissue level with a negative predictive value of 76% and a positive predictive value of 74% for predicting progression among 71 OPLs, performing on par with the manual method in our previous study. We conclude that the newly developed aNPS algorithm serves as a crucial asset in the implementation of high-resolution image analysis in routine clinical pathology practice to identify lesions that require molecular evaluation or more frequent follow-up.
Randomized selection on the GPU
Monroe, Laura Marie; Wendelberger, Joanne R; Michalak, Sarah E
2011-01-13
We implement here a fast and memory-sparing probabilistic top-N selection algorithm on the GPU. To our knowledge, this is the first direct selection algorithm in the literature for the GPU. The algorithm proceeds via a probabilistic guess-and-check process searching for the Nth element. It always gives a correct result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces the average time required by the algorithm. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
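The guess-and-check idea can be shown in a serial sketch (a CPU illustration of the logic only; the paper's contribution is the GPU implementation, and the small-slab cutoff of 64 is an arbitrary choice here):

```python
import random

def randomized_select(data, n, rng=None):
    """Las Vegas selection of the n-th largest element (1-based): guess a random
    pivot, count how many elements lie above it, and discard the part of the
    data that cannot contain the answer. The result is always exact; the
    randomness only affects how much data still needs heavy processing."""
    rng = rng or random.Random()
    candidates, rank = list(data), n
    while len(candidates) > 64:              # small remainders: just sort
        pivot = rng.choice(candidates)       # the probabilistic guess
        above = [x for x in candidates if x > pivot]
        n_equal = candidates.count(pivot)
        if len(above) >= rank:               # answer is strictly above the pivot
            candidates = above
        elif len(above) + n_equal >= rank:   # the pivot itself is the answer
            return pivot
        else:                                # answer is strictly below the pivot
            rank -= len(above) + n_equal
            candidates = [x for x in candidates if x < pivot]
    return sorted(candidates, reverse=True)[rank - 1]

vals = list(range(1000))
random.Random(5).shuffle(vals)
largest = randomized_select(vals, 1, random.Random(0))     # -> 999
rank500 = randomized_select(vals, 500, random.Random(0))   # -> 500
```

Each guess discards, in expectation, a constant fraction of the data, so only a shrinking slab ever needs the expensive processing: the property that makes the approach attractive on processors with limited fast memory.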
Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks
Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.
2013-07-01
Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.
NASA Astrophysics Data System (ADS)
Shao, Zhenfeng; Zhang, Yuan; Zhang, Lei; Song, Yang; Peng, Minjun
2016-06-01
Impervious surface area (ISA) is one of the most important indicators of urban environments. Numerous approaches have been proposed to extract impervious surfaces from multi-resolution remote sensing images, using statistical estimation, sub-pixel classification, and spectral mixture analysis. Through these methods, impervious surface maps can be effectively applied to regional-scale planning and management. For large regions, however, high-resolution remote sensing images provide more detail and are therefore more conducive to environmental monitoring and urban management analysis. Since the purpose of this study is to map impervious surfaces more effectively, three classification algorithms (random forests, decision trees, and artificial neural networks) were tested for their ability to map impervious surface. Random forests outperformed decision trees and artificial neural networks in precision. Combining spectral indices and texture features, the random forest achieved impervious surface extraction with a producer's accuracy of 0.98, a user's accuracy of 0.97, an overall accuracy of 0.98, and a kappa coefficient of 0.97.
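The reported producer's/user's accuracies and kappa coefficient come from a standard confusion-matrix analysis; a sketch with illustrative counts (not the study's data):

```python
def accuracy_metrics(cm):
    """Producer's accuracy (per reference class), user's accuracy (per mapped
    class), overall accuracy, and Cohen's kappa from a square confusion matrix
    cm[reference_class][predicted_class]."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(k))
    row_tot = [sum(row) for row in cm]                             # reference totals
    col_tot = [sum(cm[i][j] for i in range(k)) for j in range(k)]  # map totals
    overall = diag / n
    producers = [cm[i][i] / row_tot[i] for i in range(k)]
    users = [cm[j][j] / col_tot[j] for j in range(k)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)
    kappa = (overall - expected) / (1 - expected)
    return overall, producers, users, kappa

# Example matrix (counts are illustrative):
# rows = reference {impervious, pervious}, columns = classified.
cm = [[90, 10],
      [5, 95]]
overall, producers, users, kappa = accuracy_metrics(cm)
```

Kappa discounts the agreement expected by chance from the class marginals, which is why it is reported alongside overall accuracy.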
Ersek, Mary; Polissar, Nayak; Du Pen, Anna; Jablonski, Anita; Herr, Keela; Neradilek, Moni B
2015-01-01
Background Unrelieved pain among nursing home (NH) residents is a well-documented problem. Attempts have been made to enhance pain management for older adults, including those in NHs. Several evidence-based clinical guidelines have been published to assist providers in assessing and managing acute and chronic pain in older adults. Despite the proliferation and dissemination of these practice guidelines, research has shown that intensive systems-level implementation strategies are necessary to change clinical practice and patient outcomes within a health-care setting. One promising approach is the embedding of guidelines into explicit protocols and algorithms to enhance decision making. Purpose The goal of the article is to describe several issues that arose in the design and conduct of a study that compared the effectiveness of pain management algorithms coupled with a comprehensive adoption program versus the effectiveness of education alone in improving evidence-based pain assessment and management practices, decreasing pain and depressive symptoms, and enhancing mobility among NH residents. Methods The study used a cluster-randomized controlled trial (RCT) design in which the individual NH was the unit of randomization. Rogers' Diffusion of Innovations theory provided the framework for the intervention. Outcome measures were surrogate-reported usual pain, self-reported usual and worst pain, and self-reported pain-related interference with activities, depression, and mobility. Results The final sample consisted of 485 NH residents from 27 NHs. The investigators were able to use a staggered enrollment strategy to recruit and retain facilities. The adaptive randomization procedures were successful in balancing intervention and control sites on key NH characteristics. Several strategies were successfully implemented to enhance the adoption of the algorithm. Limitations/Lessons The investigators encountered several methodological challenges that were inherent to
NASA Astrophysics Data System (ADS)
Didari, Azadeh; Pinar Mengüç, M.
2017-08-01
Advances in nanotechnology and nanophotonics are inextricably linked with the need for reliable computational algorithms to be adapted as design tools for the development of new concepts in energy harvesting, radiative cooling, nanolithography and nano-scale manufacturing, among others. In this paper, we provide an outline for such a computational tool, named NF-RT-FDTD, to determine the near-field radiative transfer between structured surfaces using Finite Difference Time Domain method. NF-RT-FDTD is a direct and non-stochastic algorithm, which accounts for the statistical nature of the thermal radiation and is easily applicable to any arbitrary geometry at thermal equilibrium. We present a review of the fundamental relations for far- and near-field radiative transfer between different geometries with nano-scale surface and volumetric features and gaps, and then we discuss the details of the NF-RT-FDTD formulation, its application to sample geometries and outline its future expansion to more complex geometries. In addition, we briefly discuss some of the recent numerical works for direct and indirect calculations of near-field thermal radiation transfer, including Scattering Matrix method, Finite Difference Time Domain method (FDTD), Wiener Chaos Expansion, Fluctuating Surface Current (FSC), Fluctuating Volume Current (FVC) and Thermal Discrete Dipole Approximations (TDDA).
Markov stochasticity coordinates
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-01-01
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method-termed Markov Stochasticity Coordinates-is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
Stochastic longshore current dynamics
NASA Astrophysics Data System (ADS)
Restrepo, Juan M.; Venkataramani, Shankar
2016-12-01
We develop a stochastic parametrization, based on a 'simple' deterministic model for the dynamics of steady longshore currents, that produces ensembles that are statistically consistent with field observations of these currents. Unlike deterministic models, a stochastic parametrization incorporates randomness and hence can only match the observations in a statistical sense. Unlike statistical emulators, in which the model is tuned to the statistical structure of the observations, stochastic parametrizations are not directly tuned to match the statistics of the observations. Rather, stochastic parametrization combines deterministic, i.e., physics-based, models with stochastic models for the "missing physics" to create hybrid models that are stochastic yet can be used for making predictions, especially in the context of data assimilation. We introduce a novel measure of the utility of stochastic models of complex processes, which we call consistency of sensitivity. A model with poor consistency of sensitivity requires a great deal of parameter tuning and has a very narrow range of realistic parameters that lead to outcomes consistent with a reasonable spectrum of physical outcomes. We apply this metric to our stochastic parametrization and show that the loss of certainty inherent in the model, due to its stochastic nature, is offset by the model's resulting consistency of sensitivity. In particular, the stochastic model still retains the forward sensitivity of the deterministic model and hence respects important structural/physical constraints, yet has a broader range of parameters capable of producing outcomes consistent with the field data used in evaluating the model. This leads to an expanded range of model applicability. We show, in the context of data assimilation, that the stochastic parametrization of longshore currents achieves good results in capturing the statistics of observations that were not used in tuning the model.
Simulation of Anderson localization in a random fiber using a fast Fresnel diffraction algorithm
NASA Astrophysics Data System (ADS)
Davis, Jeffrey A.; Cottrell, Don M.
2016-06-01
Anderson localization has been previously demonstrated both theoretically and experimentally for transmission of a Gaussian beam through long distances in an optical fiber consisting of a random array of smaller fibers, each having either a higher or lower refractive index. However, the computational times were extremely long. We show how to simulate these results using a fast Fresnel diffraction algorithm. In each iteration of this approach, the light passes through a phase mask, undergoes Fresnel diffraction over a small distance, and then passes through the same phase mask. We also show results where we use a binary amplitude mask at the input that selectively illuminates either the higher or the lower index fibers. Additionally, we examine imaging of various sized objects through these fibers. In all cases, our results are consistent with other computational methods and experimental results, but with a much reduced computational time.
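The split-step scheme described above (mask, short Fresnel propagation, mask again) can be sketched with FFT-based angular-spectrum propagation. All physical parameters below (grid, wavelength, index contrast, step length) are illustrative placeholders, not the authors' fiber parameters:

```python
import numpy as np

def fresnel_step(field, mask_phase, dz, wavelength, dx):
    """One iteration of the split-step scheme: apply the phase mask, Fresnel-
    propagate a short distance dz (paraxial transfer function applied in the
    Fourier domain), then apply the same mask again."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
    field = field * np.exp(1j * mask_phase)
    field = np.fft.ifft2(np.fft.fft2(field) * H)
    return field * np.exp(1j * mask_phase)

# Gaussian input beam and a random binary-index "fiber array" phase mask
# (grid size, wavelength, index contrast, and step length are illustrative).
n, dx, wavelength, dz = 128, 1e-6, 0.5e-6, 50e-6
coords = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(coords, coords)
beam = np.exp(-(X ** 2 + Y ** 2) / (10e-6) ** 2).astype(complex)
rng = np.random.default_rng(0)
mask = 0.3 * rng.integers(0, 2, size=(n, n))    # higher/lower index sites
power0 = float(np.sum(np.abs(beam) ** 2))
for _ in range(20):
    beam = fresnel_step(beam, mask, dz, wavelength, dx)
```

Because both the phase mask and the Fresnel kernel have unit modulus, the scheme conserves optical power exactly, a useful sanity check before looking for transverse localization of the intensity profile.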
Fast conical surface evaluation via randomized algorithm in the null-screen test
NASA Astrophysics Data System (ADS)
Aguirre-Aguirre, D.; Díaz-Uribe, R.; Villalobos-Mendoza, B.
2017-01-01
This work presents a method to recover the shape of a surface via randomized algorithms when the null-screen test is used, instead of the integration process that is commonly performed. This is because the majority of the errors are introduced during the reconstruction of the surface (the integration process). Such large surfaces are widely used in the aerospace sector and in industry in general, and testing them poses a significant problem. The null-screen method is a low-cost test, and a complete surface analysis can be done using it. In this paper, we show simulations of the analysis of fast conic surfaces, demonstrating that the quality and shape of a surface under study can be recovered with a percentage error < 2%.
Security enhancement of double-random phase encryption by iterative algorithm
NASA Astrophysics Data System (ADS)
Qian, Sheng-Xia; Li, Yongnan; Kong, Ling-Jun; Li, Si-Min; Ren, Zhi-Cheng; Tu, Chenghou; Wang, Hui-Tian
2014-08-01
We propose an approach to enhance the security of optical encryption based on double-random phase encryption in a 4f system. The phase key in the input plane of the 4f system is generated by the Yang-Gu algorithm to control the phase of the encrypted information in the output plane of the 4f system, until the phase in the output plane converges to a predesigned distribution. Only the amplitude of the encrypted information must be recorded as a ciphertext. The information, which needs to be transmitted, is greatly reduced. We can decrypt the ciphertext with the aid of the predesigned phase distribution and the phase key in the Fourier plane. Our approach can resist various attacks.
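For context, the baseline 4f double-random phase encryption that the proposed method enhances is straightforward to simulate with two FFTs. This sketch shows only that baseline (the paper's Yang-Gu iterative phase design, which lets the amplitude alone serve as ciphertext, is not reproduced here; image and key values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64
img = np.zeros((n, n))
img[20:44, 20:44] = 1.0                                # toy "plaintext" image
key1 = np.exp(1j * 2 * np.pi * rng.random((n, n)))     # input-plane phase key
key2 = np.exp(1j * 2 * np.pi * rng.random((n, n)))     # Fourier-plane phase key

# Encryption: multiply by the first key, Fourier transform (first lens),
# multiply by the second key, inverse transform (second lens).
cipher = np.fft.ifft2(np.fft.fft2(img * key1) * key2)

# Decryption reverses the process with the conjugate keys.
decrypted = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(key2)) * np.conj(key1)
```

In this classical scheme both amplitude and phase of `cipher` must be recorded; the paper's iterative algorithm converges the output phase to a predesigned distribution so that only the amplitude needs to be transmitted.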
An improved random walk algorithm for the implicit Monte Carlo method
Keady, Kendra P.; Cleveland, Mathew A.
2017-01-01
In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in “fully-gray” form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2–4 compared to standard RW, and a factor of ∼3–6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
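The cell-wise cutoff selection and group collapse can be sketched as follows. The optical-depth threshold, group ordering, and weights are illustrative assumptions (in practice the weights would be Planck-spectrum group weights and the eligibility criterion matches the RW method's requirements):

```python
def group_cutoff(opacities, cell_width, tau_thick=5.0):
    """Highest group index g such that groups 0..g-1 are all optically thick in
    this cell (optical depth sigma * dx above tau_thick). Groups below the
    cutoff are group-collapsed for Random Walk; groups above it are transported
    with standard IMC. (Opacities assumed ordered from thickest to thinnest;
    the threshold value is illustrative.)"""
    g = 0
    while g < len(opacities) and opacities[g] * cell_width > tau_thick:
        g += 1
    return g

def collapse_opacity(opacities, weights, cutoff):
    """Weighted group collapse of the thick groups into a single gray opacity
    for the RW diffusion problem."""
    if cutoff == 0:
        return None                    # nothing eligible for RW in this cell
    wsum = sum(weights[:cutoff])
    return sum(o * w for o, w in zip(opacities[:cutoff], weights[:cutoff])) / wsum

sigmas = [100.0, 50.0, 1.0, 0.1]       # per-group opacities in one cell
weights = [0.4, 0.3, 0.2, 0.1]
cut = group_cutoff(sigmas, cell_width=0.2)        # groups 0 and 1 are thick
gray = collapse_opacity(sigmas, weights, cut)
```

Particles in groups below the cutoff take RW diffusion steps with the collapsed gray opacity, while particles above it fall back to standard IMC transport, which is the partially-gray split the method's name refers to.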
SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation
Yao, W; Farr, J
2015-06-15
Purpose: To develop a random walk model algorithm for calculating proton dose with a balanced computational burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scattering (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation time and memory. Our RW model therefore adopts a spatial distribution derived from the angular one to accelerate the computation and to decrease memory usage. From the physics and from comparison with MC simulations, we determined and analytically expressed the critical variables affecting dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computational burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water with a 10 cm cortical bone and a 10 cm lung embedded in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model substantially improves dose accuracy in heterogeneous regions, and it is about 10 times faster than MC simulations.
An improved random walk algorithm for the implicit Monte Carlo method
NASA Astrophysics Data System (ADS)
Keady, Kendra P.; Cleveland, Mathew A.
2017-01-01
In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in "fully-gray" form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2-4 compared to standard RW, and a factor of ∼3-6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
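The group-collapse step at the heart of PGRW can be sketched as a weighted opacity average over the groups below the cutoff. This is a minimal illustration, not the authors' code; the weight vector stands in for whatever spectrum weighting (e.g. Planck) the IMC implementation actually uses, and all names are hypothetical.

```python
import numpy as np

def collapse_gray_opacity(sigma_g, w_g, cutoff):
    """Collapse the multigroup opacities sigma_g for the groups below
    `cutoff` into a single gray opacity for the RW diffusion problem,
    using (assumed, e.g. Planck) spectrum weights w_g."""
    s = np.asarray(sigma_g[:cutoff], dtype=float)
    w = np.asarray(w_g[:cutoff], dtype=float)
    return float(np.sum(w * s) / np.sum(w))

# groups 0-2 are optically thick (eligible for RW after collapsing),
# groups 3-5 are optically thin (kept on standard IMC transport)
sigma = [100.0, 80.0, 60.0, 0.5, 0.2, 0.1]
weights = [1.0, 2.0, 1.0, 1.0, 1.0, 1.0]
gray = collapse_gray_opacity(sigma, weights, cutoff=3)
```

Particles in groups at or above the cutoff would bypass this collapsed opacity entirely and be transported with standard IMC.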
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for the rejection of false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model, and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow in 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
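The RANSAC loop that the architecture implements in hardware follows a standard hypothesize-and-verify pattern: draw a minimal sample, fit a candidate model, count inliers, and keep the best hypothesis. A software sketch on the simplest possible model (a 2-D line rather than the paper's four-submodel projective mapping) illustrates the control flow; all names here are illustrative.

```python
import numpy as np

def ransac_line(points, n_iter=200, thresh=0.05, seed=0):
    """Generic RANSAC loop shown on 2-D line fitting: sample two points,
    hypothesize a line, count inliers, keep the best, refit on inliers."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        norm = np.hypot(x2 - x1, y2 - y1)
        if norm == 0.0:
            continue
        # perpendicular distance of every point to the candidate line
        dist = np.abs((x2 - x1) * (pts[:, 1] - y1)
                      - (y2 - y1) * (pts[:, 0] - x1)) / norm
        inliers = dist < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # final least-squares refit on the consensus set
    slope, intercept = np.polyfit(pts[best, 0], pts[best, 1], 1)
    return slope, intercept, best

# line y = 2x + 1 with a few gross outliers
x = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([x, 2.0 * x + 1.0])
pts[:5, 1] += 5.0
slope, intercept, inliers = ransac_line(pts)
```

For the projective case the minimal sample is four point correspondences and the model is a 3×3 homography; the loop structure is unchanged.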
NASA Astrophysics Data System (ADS)
Bodin, Jacques
2015-03-01
In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
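For the 1-D advection-dispersion case, a time-domain random walk draws the particle travel time over a fixed distance directly from a first-passage-time law; for Brownian motion with drift this is the inverse Gaussian distribution. The sketch below uses that textbook assumption (not the paper's approximate multi-dimensional solutions), with illustrative parameter values.

```python
import numpy as np

# 1-D TDRW step under an inverse-Gaussian first-passage-time assumption:
# travel time over distance L at velocity v with dispersion coefficient D
L, v, D = 10.0, 1.0, 0.1
mu = L / v              # mean arrival time
lam = L**2 / (2.0 * D)  # inverse-Gaussian shape parameter

rng = np.random.default_rng(42)
arrival_times = rng.wald(mu, lam, size=200_000)  # one draw per particle
mean_t = float(arrival_times.mean())
```

Each draw replaces many small random-walk steps with a single jump of the clock, which is the source of the method's efficiency in homogeneous subregions.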
On implementation of EM-type algorithms in the stochastic models for a matrix computing on GPU
Gorshenin, Andrey K.
2015-03-10
The paper discusses the main ideas of an implementation of EM-type algorithms for computing on graphics processors and their application to probabilistic models based on the Cox processes. An example of GPU-adapted MATLAB source code for finite normal mixtures with the expectation-maximization matrix formulas is given. The computational efficiency of the GPU versus the CPU is illustrated for different sample sizes.
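The matrix form of the EM updates for a finite normal mixture is short enough to sketch in NumPy. This is only the CPU reference logic (the paper's code is GPU-adapted MATLAB), and the initialization choice is an assumption.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    """EM for a 1-D finite normal mixture, written with the vectorized
    (matrix) E- and M-step formulas that port naturally to a GPU."""
    x = np.asarray(x, dtype=float)
    w = np.full(k, 1.0 / k)                        # mixing weights
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread-out initial means
    var = np.full(k, x.var())
    for _ in range(n_iter):
        # E-step: responsibility matrix r, shape (n, k)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form weighted updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
w, mu, var = em_gmm_1d(x, k=2)
```

Every step is an elementwise or reduction operation on an (n, k) matrix, which is exactly the shape of work GPUs accelerate well.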
Application of random number generators in genetic algorithms to improve rainfall-runoff modelling
NASA Astrophysics Data System (ADS)
Chlumecký, Martin; Buchtele, Josef; Richta, Karel
2017-10-01
The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists. Fast, high-quality model calibration is therefore a valuable improvement. This paper describes a novel methodology and software for the optimisation of rainfall-runoff modelling using a genetic algorithm (GA) with a newly designed random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a high-quality random number generator. The new HRNG generates random numbers based on hydrological information, and it provides better numbers than pure software generators. The GA enhances the model calibration considerably, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and improves rainfall-runoff modelling.
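The design point, a GA whose randomness comes from a pluggable generator, can be sketched as follows. The GA below is a generic toy (tournament selection, blend crossover, Gaussian mutation) minimising a stand-in objective, not the paper's calibration software; `rng` is where an HRNG-like source would be injected without touching the rest of the optimiser.

```python
import numpy as np

def genetic_minimize(f, bounds, rng, pop=40, gens=120, mut_sigma=0.1):
    """Minimal real-coded GA with an injected random source `rng`,
    mirroring the idea of swapping in a custom generator (HRNG)."""
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.array([f(ind) for ind in x])
        # binary tournament selection
        a, b = rng.integers(0, pop, size=(2, pop))
        parents = x[np.where(fit[a] < fit[b], a, b)]
        # blend crossover with a reshuffled copy, then Gaussian mutation
        mates = parents[rng.permutation(pop)]
        alpha = rng.random((pop, 1))
        children = alpha * parents + (1.0 - alpha) * mates
        children += rng.normal(0.0, mut_sigma, children.shape)
        children = np.clip(children, lo, hi)
        children[0] = x[np.argmin(fit)]  # elitism: keep the best so far
        x = children
    fit = np.array([f(ind) for ind in x])
    return x[np.argmin(fit)], float(fit.min())

# stand-in objective (a real use would wrap the rainfall-runoff model error)
best, val = genetic_minimize(lambda p: float(np.sum(p ** 2)),
                             bounds=[(-5.0, 5.0)] * 2,
                             rng=np.random.default_rng(0))
```

Because every stochastic decision routes through `rng`, comparing generators (software PRNG versus an HRNG) requires changing only that one argument.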
Paul, Desbordes; Su, Ruan; Romain, Modzelewski; Sébastien, Vauclin; Pierre, Vera; Isabelle, Gardin
2016-12-28
The outcome prediction of patients can greatly help to personalize cancer treatment. A large number of quantitative features (clinical exams, imaging, …) are potentially useful to assess the patient outcome. The challenge is to choose the most predictive subset of features. In this paper, we propose a new feature selection strategy called GARF (genetic algorithm based on random forest), applied to features extracted from positron emission tomography (PET) images and clinical data. The most relevant features, predictive of the therapeutic response or prognostic of patient survival 3 years after the end of treatment, were selected using GARF on a cohort of 65 patients with locally advanced oesophageal cancer eligible for chemo-radiation therapy. The most relevant predictive results were obtained with a subset of 9 features, leading to a random forest misclassification rate of 18±4% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.823±0.032. The most relevant prognostic results were obtained with 8 features, leading to an error rate of 20±7% and an AUC of 0.750±0.108. Both predictive and prognostic results show better performance using GARF than using the 4 other studied methods.
Harmonics elimination algorithm for operational modal analysis using random decrement technique
NASA Astrophysics Data System (ADS)
Modak, S. V.; Rawal, Chetan; Kundra, T. K.
2010-05-01
Operational modal analysis (OMA) extracts the modal parameters of a structure from its output response, generally recorded during operation. When applied to mechanical engineering structures, OMA is often faced with harmonics present in the output response, which can cause erroneous modal extraction. This paper demonstrates for the first time that the random decrement (RD) method can be efficiently employed to eliminate the harmonics from the randomdec signatures. Further, the work shows that even large-amplitude harmonics can be eliminated effectively by including additional random excitation, which need not be recorded for analysis, as is the case with any other OMA method. The free decays obtained from RD have been used for modal identification of the system with the eigensystem realization algorithm (ERA). The proposed harmonic elimination method has an advantage over previous methods in that it does not require the harmonic frequencies to be known and can be used for multiple harmonics, including periodic signals. The theory behind harmonic elimination is first developed and validated. The effectiveness of the method is demonstrated through a simulated study and then by experimental studies on a beam and a more complex F-shape structure, which resembles the skeleton of a drilling or milling machine tool. Cases with single and multiple harmonics present in the response are considered.
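The random decrement signature itself is simple to compute: average all response segments that start at a trigger condition, so that the random content cancels while the deterministic decay remains. A sketch with a level up-crossing trigger, one of several trigger choices in the RD literature (signal parameters are illustrative):

```python
import numpy as np

def randomdec(x, trigger_level, seg_len):
    """Random decrement signature: average the segments of x that start
    wherever x up-crosses `trigger_level`. Random content averages out;
    the free-decay-like signature remains."""
    x = np.asarray(x, dtype=float)
    # indices where the signal crosses the trigger level from below
    up = np.where((x[:-1] < trigger_level) & (x[1:] >= trigger_level))[0] + 1
    up = up[up + seg_len <= len(x)]           # drop segments running off the end
    segs = np.stack([x[i:i + seg_len] for i in up])
    return segs.mean(axis=0), len(up)

# illustrative response: 1.5 Hz component buried in broadband noise
t = np.linspace(0.0, 100.0, 20_000)
rng = np.random.default_rng(3)
resp = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.normal(size=t.size)
sig, n_triggers = randomdec(resp, trigger_level=1.0, seg_len=400)
```

The resulting signature (`sig`) plays the role of a free decay and can be fed to ERA for modal identification.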
Algorithms for propagating uncertainty across heterogeneous domains
Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.
2015-12-30
We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction, but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent, or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve the global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box, and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the methods we propose is to combine local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: the first enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations; the second is based on PDE-constrained multi-objective optimization and allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.
Stochastic models of solute transport in highly heterogeneous geologic media
Semenov, V.N.; Korotkin, I.A.; Pruess, K.; Goloviznin, V.M.; Sorokovikova, O.S.
2009-09-15
A stochastic model of anomalous diffusion was developed in which transport occurs by random motion of Brownian particles, described by distribution functions of random displacements with heavy (power-law) tails. A variant of an effective algorithm for generating random functions with power-law asymptotics and an arbitrary asymmetry factor is proposed; it is based on the Gnedenko-Lévy limit theorem and makes it possible to reproduce all known Lévy α-stable fractal processes. A two-dimensional stochastic random walk algorithm has been developed that approximates anomalous diffusion with streamline-dependent and space-dependent parameters. The motivation for introducing this type of dispersion model is the observed fact that tracers in natural aquifers spread at different super-Fickian rates in different directions. For this and other important cases, stochastic random walk models are the only known way to solve the so-called multiscaling fractional-order diffusion equation with space-dependent parameters. Some comparisons of model results and field experiments are presented.
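Lévy α-stable increments for such a walk are commonly generated with the Chambers-Mallows-Stuck transformation. A sketch for the symmetric case follows; the skewed case needed for an arbitrary asymmetry factor adds a shift angle to the same construction, which is omitted here.

```python
import numpy as np

def stable_symmetric(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for standard symmetric Lévy
    alpha-stable variates (skewness beta = 0): one uniform angle and
    one exponential variate per sample."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(7)
x = stable_symmetric(alpha=1.5, size=100_000, rng=rng)
```

For alpha = 2 the construction reduces to a Gaussian; for alpha < 2 the heavy power-law tails that drive super-Fickian spreading appear.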
Hoshino, Tatsuhiko; Inagaki, Fumio
2017-01-01
Next-generation sequencing (NGS) is a powerful tool for analyzing environmental DNA and provides a comprehensive molecular view of microbial communities. To obtain the copy number of particular sequences in an NGS library, however, additional quantitative analysis such as quantitative PCR (qPCR) or digital PCR (dPCR) is required. Furthermore, the number of sequences in a library does not always reflect the original copy number of a target gene because of biases caused by PCR amplification, making it difficult to convert the proportion of particular sequences in the NGS library to a copy number using the mass of input DNA. To address this issue, we applied a stochastic labeling approach with random-tag sequences and developed an NGS-based quantification protocol that enables simultaneous sequencing and quantification of the targeted DNA. This quantitative sequencing (qSeq) is initiated from single-primer extension (SPE) using a primer with a random tag adjacent to the 5' end of the target-specific sequence. During SPE, each DNA molecule is stochastically labeled with the random tag. Subsequently, first-round PCR is conducted, specifically targeting the SPE product, followed by second-round PCR to index for NGS. The number of random tags is determined only during the SPE step and is therefore not affected by the two rounds of PCR, which may introduce amplification biases. In the case of 16S rRNA genes, after NGS sequencing and taxonomic classification, the absolute copy number of a target phylotype's 16S rRNA gene can be estimated by Poisson statistics from the count of random tags incorporated at the end of the sequence. To test the feasibility of this approach, the 16S rRNA gene of Sulfolobus tokodaii was subjected to qSeq, which resulted in accurate quantification of 5.0 × 10^3 to 5.0 × 10^4 copies of the 16S rRNA gene. Furthermore, qSeq was applied to mock microbial communities and environmental samples, and the results were comparable to those obtained using digital PCR and
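The Poisson correction underlying tag counting can be sketched in a few lines: if n molecules each draw a tag uniformly from M possibilities, the expected number of distinct tags observed is M(1 − e^(−n/M)), which inverts to an estimate of n from the distinct-tag count. The tag-space size below is illustrative (an 8-base random tag would give 4^8 possibilities), not a parameter stated in the abstract.

```python
import math

def poisson_copy_estimate(distinct_tags, tag_space):
    """Estimate the number of labeled input molecules from the count of
    distinct random tags observed, assuming tags are drawn uniformly
    from `tag_space` possibilities (Poisson statistics)."""
    if distinct_tags >= tag_space:
        raise ValueError("tag space saturated; estimate undefined")
    return -tag_space * math.log(1.0 - distinct_tags / tag_space)

# illustrative: 5000 distinct tags seen out of 4**8 = 65536 possible
est = poisson_copy_estimate(distinct_tags=5000, tag_space=4**8)
```

The estimate exceeds the raw distinct-tag count, compensating for molecules that happened to draw the same tag.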
NASA Astrophysics Data System (ADS)
Zhou, Shengfan
2017-08-01
We first establish some sufficient conditions for constructing a random exponential attractor for a continuous cocycle on a separable Banach space and weighted spaces of infinite sequences. Then we apply our abstract result to study the existence of random exponential attractors for non-autonomous first order dissipative lattice dynamical systems with multiplicative white noise.
Fulger, Daniel; Scalas, Enrico; Germano, Guido
2008-02-01
We present a numerical method for the Monte Carlo simulation of uncoupled continuous-time random walks with a Lévy α-stable distribution of jumps in space and a Mittag-Leffler distribution of waiting times, and apply it to the stochastic solution of the Cauchy problem for a partial differential equation with fractional derivatives both in space and in time. The one-parameter Mittag-Leffler function is the natural survival probability leading to time-fractional diffusion equations. Transformation methods for Mittag-Leffler random variables were found later than the well-known transformation method by Chambers, Mallows, and Stuck for Lévy α-stable random variables and so far have not received as much attention; nor have they been used together with the latter in spite of their mathematical relationship due to the geometric stability of the Mittag-Leffler distribution. Combining the two methods, we obtain an accurate approximation of space- and time-fractional diffusion processes almost as easy and fast to compute as for standard diffusion processes.
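The Mittag-Leffler transformation method the paper builds on has a closed form analogous to inverse-transform sampling of exponentials, to which it reduces as beta approaches 1. A sketch, with the time-scale parameter written as gamma_t:

```python
import numpy as np

def mittag_leffler_waits(beta, gamma_t, size, rng):
    """Transformation method for Mittag-Leffler distributed waiting
    times (the natural survival function of time-fractional diffusion).
    For beta -> 1 the bracket tends to 1 and the waits become
    exponential with scale gamma_t."""
    u = rng.random(size)
    v = rng.random(size)
    return (-gamma_t * np.log(u)
            * (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
               - np.cos(beta * np.pi)) ** (1.0 / beta))

rng = np.random.default_rng(11)
tau = mittag_leffler_waits(beta=0.9, gamma_t=1.0, size=100_000, rng=rng)
```

Pairing these waits with Chambers-Mallows-Stuck jumps yields sample paths of the space- and time-fractional diffusion process.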
Soufi, M; Asl, A Kamali; Geramifar, P
2015-06-15
Purpose: The objective of this study was to find the best seed localization parameters for application of the random walk algorithm to lung tumor delineation in positron emission tomography (PET) images. Methods: PET images suffer from statistical noise, and tumor delineation in these images is therefore a challenging task. The random walk algorithm, a graph-based image segmentation technique, is reliably robust to image noise. Its fast computation and fast editing characteristics also make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. The validation and verification of the algorithm were done with a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (in incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm^3. For seed localization, we selected pixels with different maximum standardized uptake value (SUVmax) percentages: at least (70%, 80%, 90%, and 100%) SUVmax for foreground seeds and up to (20% to 55%, in 5% increments) SUVmax for background seeds. To investigate the algorithm's performance on clinical data, 19 patients with lung tumors were also studied. The contours produced by the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice similarity coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation support its application for radiation treatment planning and
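The reported best seed thresholds translate directly into mask construction. A sketch of that step alone (the random-walk solve itself is omitted; the tiny SUV array is illustrative):

```python
import numpy as np

def rw_seeds(suv, fg_frac=0.70, bg_frac=0.30):
    """Seed masks for random-walk PET segmentation using the thresholds
    the study found best: voxels >= 70% of SUVmax become foreground
    seeds, voxels <= 30% of SUVmax become background seeds; everything
    in between is left for the random walker to label."""
    suv = np.asarray(suv, dtype=float)
    suv_max = suv.max()
    fg = suv >= fg_frac * suv_max
    bg = suv <= bg_frac * suv_max
    return fg, bg

# toy 3x3 SUV slice; SUVmax = 10.0
suv = np.array([[0.5, 2.0, 8.0],
                [1.0, 9.5, 10.0],
                [0.2, 3.5, 7.5]])
fg, bg = rw_seeds(suv)
```

Only the intermediate-uptake voxels (here the 3.5 entry) remain unlabeled and are assigned by the random-walk probabilities.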
Mehrotra, Sanjay
2016-09-07
The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44]; and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1-2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new 'central' cutting surface algorithm developed for solving large-scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5-6 we present our work on two application problems of interest to DOE.
Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.; Castaing, Jeremy
2016-04-15
Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.
NASA Astrophysics Data System (ADS)
Godinho, Sérgio; Guiomar, Nuno; Gil, Artur
2016-07-01
This study aims to develop and propose a methodological approach for montado ecosystem mapping using Landsat 8 multi-spectral data, vegetation indices, and the Stochastic Gradient Boosting (SGB) algorithm. Two Landsat 8 scenes (images from spring and summer 2014) of the same area in southern Portugal were acquired. Six vegetation indices were calculated for each scene: the Enhanced Vegetation Index (EVI), the Short-Wave Infrared Ratio (SWIR32), the Carotenoid Reflectance Index 1 (CRI1), the Green Chlorophyll Index (CIgreen), the Normalised Multi-band Drought Index (NMDI), and the Soil-Adjusted Total Vegetation Index (SATVI). Based on this information, two datasets were prepared: (i) Dataset I only included multi-temporal Landsat 8 spectral bands (LS8), and (ii) Dataset II included the same information as Dataset I plus vegetation indices (LS8 + VIs). The integration of the vegetation indices into the classification scheme resulted in a significant improvement in the accuracy of Dataset II's classifications when compared to Dataset I (McNemar test: Z-value = 4.50), leading to a difference of 4.90% in overall accuracy and 0.06 in the Kappa value. For the montado ecosystem, adding vegetation indices in the classification process showed a relevant increment in producer and user accuracies of 3.64% and 6.26%, respectively. By using the variable importance function from the SGB algorithm, it was found that the six most prominent variables (from a total of 24 tested variables) were the following: EVI_summer; CRI1_spring; SWIR32_spring; B6_summer; B5_summer; and CIgreen_summer.
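Of the six indices, the EVI (whose summer value, EVI_summer, ranked as the most important variable) is computed from three reflectance bands with standard coefficients. A sketch with illustrative reflectance values, not values from the study:

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index from surface reflectances, using the
    standard coefficients (gain 2.5, aerosol terms 6 and 7.5, soil
    adjustment 1). For Landsat 8, NIR/red/blue are bands 5/4/2."""
    nir, red, blue = (np.asarray(b, dtype=float) for b in (nir, red, blue))
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# illustrative reflectances: dense canopy vs. dry bare soil
canopy = evi(nir=0.45, red=0.05, blue=0.03)
soil = evi(nir=0.25, red=0.20, blue=0.15)
```

Dense vegetation yields a markedly higher EVI than bare soil, which is what makes the index discriminative for montado cover.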
Yang, Yang; Peng, Degao; Lu, Jianfeng; Yang, Weitao
2014-09-28
The particle-particle random phase approximation (pp-RPA) has been used to investigate excitation problems in our recent paper [Y. Yang, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 224105 (2013)]. It has been shown to be capable of describing double, Rydberg, and charge transfer excitations, which are challenging for conventional time-dependent density functional theory (TDDFT). However, its performance on larger molecules is unknown as a result of its expensive O(N^6) scaling. In this article, we derive and implement a Davidson iterative algorithm for the pp-RPA to calculate the lowest few excitations for large systems. The formal scaling is reduced to O(N^4), which is comparable with the commonly used configuration interaction singles (CIS) and TDDFT methods. With this iterative algorithm, we carried out benchmark tests on molecules that are significantly larger than the molecules in our previous paper with a reasonably large basis set. Despite some self-consistent field convergence problems with ground state calculations of (N - 2)-electron systems, we are able to accurately capture lowest few excitations for systems with converged calculations. Compared to CIS and TDDFT, there is no systematic bias for the pp-RPA with the mean signed error close to zero. The mean absolute error of pp-RPA with B3LYP or PBE references is similar to that of TDDFT, which suggests that the pp-RPA is a comparable method to TDDFT for large molecules. Moreover, excitations with relatively large non-HOMO excitation contributions are also well described in terms of excitation energies, as long as there is also a relatively large HOMO excitation contribution. These findings, in conjunction with the capability of pp-RPA for describing challenging excitations shown earlier, further demonstrate the potential of pp-RPA as a reliable and general method to describe excitations, and to be a good alternative to TDDFT methods.
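The Davidson scheme can be illustrated on a generic symmetric, diagonally dominant matrix, which is the structure its diagonal preconditioner assumes. This sketch is dense NumPy for clarity, not the matrix-free O(N^4) electronic-structure implementation the paper derives.

```python
import numpy as np

def davidson_lowest(A, k=1, tol=1e-8, max_iter=100):
    """Davidson iteration for the lowest k eigenpairs of a symmetric,
    diagonally dominant matrix: diagonalize in a small search subspace
    and expand it with diagonally preconditioned residual vectors."""
    n = A.shape[0]
    diag_A = np.diag(A)
    V = np.eye(n, k)                           # unit-vector starting guesses
    theta, X = np.zeros(k), V
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)                 # orthonormalize the subspace
        H = V.T @ A @ V                        # small projected matrix
        vals, vecs = np.linalg.eigh(H)
        theta, X = vals[:k], V @ vecs[:, :k]   # lowest Ritz pairs
        R = A @ X - X * theta                  # residuals, one column per root
        if np.linalg.norm(R) < tol:
            break
        for j in range(k):                     # diagonal preconditioner
            denom = diag_A - theta[j]
            denom[np.abs(denom) < 1e-12] = 1e-12
            V = np.column_stack([V, R[:, j] / denom])
    return theta, X

rng = np.random.default_rng(5)
n = 200
A = np.diag(np.arange(1.0, n + 1.0)) + 1e-2 * rng.standard_normal((n, n))
A = (A + A.T) / 2.0                            # symmetrize the test matrix
vals, vecs = davidson_lowest(A, k=2)
```

Only matrix-vector products with A are actually required, which is what permits the reduced scaling when A is never formed explicitly.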
Pinsker, Jordan E.; Lee, Joon Bok; Dassau, Eyal; Seborg, Dale E.; Bradley, Paige K.; Gondhalekar, Ravi; Bevier, Wendy C.; Huyett, Lauren; Zisser, Howard C.
2016-01-01
OBJECTIVE To evaluate two widely used control algorithms for an artificial pancreas (AP) under nonideal but comparable clinical conditions. RESEARCH DESIGN AND METHODS After a pilot safety and feasibility study (n = 10), closed-loop control (CLC) was evaluated in a randomized, crossover trial of 20 additional adults with type 1 diabetes. Personalized model predictive control (MPC) and proportional integral derivative (PID) algorithms were compared in supervised 27.5-h CLC sessions. Challenges included overnight control after a 65-g dinner, response to a 50-g breakfast, and response to an unannounced 65-g lunch. Boluses of announced dinner and breakfast meals were given at mealtime. The primary outcome was time in glucose range 70–180 mg/dL. RESULTS Mean time in range 70–180 mg/dL was greater for MPC than for PID (74.4 vs. 63.7%, P = 0.020). Mean glucose was also lower for MPC than PID during the entire trial duration (138 vs. 160 mg/dL, P = 0.012) and 5 h after the unannounced 65-g meal (181 vs. 220 mg/dL, P = 0.019). There was no significant difference in time with glucose <70 mg/dL throughout the trial period. CONCLUSIONS This first comprehensive study to compare MPC and PID control for the AP indicates that MPC performed particularly well, achieving nearly 75% time in the target range, including the unannounced meal. Although both forms of CLC provided safe and effective glucose management, MPC performed as well or better than PID in all metrics. PMID:27289127
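For reference, the PID control law being compared is the textbook one: a weighted sum of the error relative to a setpoint, its running integral, and its derivative. The sketch below is only that control law; the gains, setpoint, and sample time are arbitrary illustrations, not the clinical tuning used in the trial.

```python
class PID:
    """Textbook discrete PID controller:
    u = Kp*e + Ki*integral(e) + Kd*de/dt, with e measured against a
    setpoint (here imagined as a glucose target in mg/dL)."""
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement):
        error = measurement - self.setpoint        # positive when above target
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.05, ki=0.001, kd=0.1, setpoint=120.0, dt=5.0)
u1 = pid.update(180.0)   # first sample: no derivative term yet
u2 = pid.update(170.0)   # falling glucose reduces the command
```

MPC, by contrast, optimizes a predicted trajectory over a horizon using a patient model, which is why it can act ahead of meal disturbances rather than only react to them.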
Chen, YiMing; Cao, Wei; Gao, XianChao; Ong, HuiShan; Ji, Tong
2015-06-09
Head and Neck Squamous Cell Carcinoma (HNSCC) has a high incidence in elderly patients. Postoperative complications present great challenges in treatment and are difficult to warn of early. Data from 525 patients diagnosed with HNSCC, comprising a training set (n = 513) and an external testing set (n = 12), were collected in our institution between 2006 and 2011. The variables involved were general demographic characteristics, complications, disease, and treatment given. Five data mining algorithms were first used to construct predictive models in the training set. Subsequently, cross-validation was used to compare the performance of these models, and the best data mining algorithm was then selected to perform the prediction in the external testing set. Data from 513 patients (age > 60 y) with HNSCC in the training set were included, and 44 variables were selected (P < 0.05). Five predictive models were constructed; the model with 44 variables based on the Random Forest algorithm demonstrated the best accuracy (89.084%) and the best AUC value (0.949). In the external testing set, an accuracy of 83.333% and an AUC value of 0.781 were obtained using the random forest algorithm model. Data mining is a promising approach for predicting the probability of postoperative complications in elderly patients with HNSCC. Our results highlight the potential of computational prediction of postoperative complications in elderly patients with HNSCC using the random forest algorithm model.
Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.
2011-09-01
The herein presented mutation-based artificial fish swarm (AFS) algorithm includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-02-08
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
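On a single branch the simulation-based side of stochastic mapping is easy to see: draw substitution histories from the continuous-time Markov chain and summarize each draw. A toy Monte Carlo sketch for a model in which every state is left at the same total rate, so the simulation-free mean substitution count on a branch of length t is simply rate × t (the paper's contribution is the analogous simulation-free computation of variances across an entire tree; the function names here are illustrative):

```python
import random

def simulate_substitutions(rate_out, t, rng):
    """Draw one substitution history of duration t for a CTMC in which every
    state is left at total rate `rate_out`; return the substitution count."""
    count, elapsed = 0, 0.0
    while True:
        elapsed += rng.expovariate(rate_out)
        if elapsed > t:
            return count
        count += 1

def mc_mean_substitutions(rate_out, t, n_sims=20000, seed=1):
    """Simulation-based estimate of the mean mapping summary, to be compared
    with the simulation-free value rate_out * t."""
    rng = random.Random(seed)
    return sum(simulate_substitutions(rate_out, t, rng)
               for _ in range(n_sims)) / n_sims
```

The dynamic programming algorithm in the paper replaces this kind of sampling with exact recursions over the tree, at linear cost in the number of tips.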
Pre-Hospital Triage of Trauma Patients Using the Random Forest Computer Algorithm
Scerbo, Michelle; Radhakrishnan, Hari; Cotton, Bryan; Dua, Anahita; Del Junco, Deborah; Wade, Charles; Holcomb, John B.
2015-01-01
Background Over-triage not only wastes resources but displaces the patient from their community and causes delay of treatment for the more seriously injured. This study aimed to validate the Random Forest computer model (RFM) as a means of better triaging trauma patients to Level I trauma centers. Methods Adult trauma patients with “medium activation” presenting via helicopter to a Level I Trauma Center from May 2007 to May 2009 were included. The “medium activation” trauma patient is alert and hemodynamically stable on scene but has either subnormal vital signs or an accumulation of risk factors that may indicate a potentially serious injury. Variables included in the RFM computer analysis were demographics, mechanism of injury, pre-hospital fluid, medications, vitals, and disposition. Statistical analysis was performed via the Random Forest Algorithm to compare our institutional triage rate to rates determined by the RFM. Results A total of 1,653 patients were included in this study, of which 496 were used in the testing set of the RFM. In our testing set, 33.8% of patients brought to our Level I trauma center could have been managed at a Level III trauma center and 88% of patients that required a Level I trauma center were identified correctly. In the testing set, there was an over-triage rate of 66%; utilizing the RFM, we decreased the over-triage rate to 42% (p<0.001). There was an under-triage rate of 8.3%. The RFM predicted patient disposition with a sensitivity of 89%, specificity of 42%, negative predictive value of 92% and positive predictive value of 34%. Conclusion While prospective validation is required, it appears that computer modeling potentially could be used to guide triage decisions, allowing both more accurate triage and more efficient use of the trauma system. PMID:24484906
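The four summary statistics reported above follow directly from a 2×2 confusion table of predicted versus actual need for a Level I center. A minimal sketch (the counts in the usage line are hypothetical, not the study's raw data):

```python
def triage_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-table counts.
    tp = needed a Level I center and was predicted to; tn = did not need one
    and was predicted not to; fn and fp are the corresponding misses."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of Level I needs caught
        "specificity": tn / (tn + fp),   # fraction of non-needs sent elsewhere
        "ppv": tp / (tp + fp),           # predicted Level I that truly needed it
        "npv": tn / (tn + fn),           # predicted non-Level I truly not needing it
    }
```

For example, with hypothetical counts `triage_metrics(80, 20, 30, 70)` gives sensitivity 0.80, specificity 0.70, PPV 80/110 and NPV 70/90.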
Prehospital triage of trauma patients using the Random Forest computer algorithm.
Scerbo, Michelle; Radhakrishnan, Hari; Cotton, Bryan; Dua, Anahita; Del Junco, Deborah; Wade, Charles; Holcomb, John B
2014-04-01
Overtriage not only wastes resources but also displaces the patient from their community and causes delay of treatment for the more seriously injured. This study aimed to validate the Random Forest computer model (RFM) as a means of better triaging trauma patients to level 1 trauma centers. Adult trauma patients with "medium activation" presenting via helicopter to a level 1 trauma center from May 2007 to May 2009 were included. The "medium activation" trauma patient is alert and hemodynamically stable on scene but has either subnormal vital signs or accumulation of risk factors that may indicate a potentially serious injury. Variables included in the RFM analysis were demographics, mechanism of injury, prehospital fluid, medications, vitals, and disposition. Statistical analysis was performed via the Random Forest algorithm to compare our institutional triage rate to rates determined by the RFM. A total of 1653 patients were included in this study, of which 496 were used in the testing set of the RFM. In our testing set, 33.8% of patients brought to our level 1 trauma center could have been managed at a level 3 trauma center, and 88% of patients who required a level 1 trauma center were identified correctly. In the testing set, there was an overtriage rate of 66%, whereas using the RFM, we decreased the overtriage rate to 42% (P < 0.001). There was an undertriage rate of 8.3%. The RFM predicted patient disposition with a sensitivity of 89%, specificity of 42%, negative predictive value of 92%, and positive predictive value of 34%. Although prospective validation is required, it appears that computer modeling potentially could be used to guide triage decisions, allowing both more accurate triage and more efficient use of the trauma system. Copyright © 2014 Elsevier Inc. All rights reserved.
Li, Zhan-Chao; Lai, Yan-Hua; Chen, Li-Li; Chen, Chao; Xie, Yun; Dai, Zong; Zou, Xiao-Yong
2013-04-05
In the post-genome era, one of the most important and challenging tasks is to identify the subcellular localizations of protein complexes, and further elucidate their functions in human health with applications to understand disease mechanisms, diagnosis and therapy. Although various experimental approaches have been developed and employed to identify the subcellular localizations of protein complexes, the laboratory technologies fall far behind the rapid accumulation of protein complexes. Therefore, it is highly desirable to develop a computational method to rapidly and reliably identify the subcellular localizations of protein complexes. In this study, a novel method is proposed for predicting subcellular localizations of mammalian protein complexes based on graph theory with a random forest algorithm. Protein complexes are modeled as weighted graphs containing nodes and edges, where nodes represent proteins, edges represent protein-protein interactions and weights are descriptors of protein primary structures. Some topological structure features are proposed and adopted to characterize protein complexes based on graph theory. Random forest is employed to construct a model and predict subcellular localizations of protein complexes. Accuracies on a training set by a 10-fold cross-validation test for predicting plasma membrane/membrane attached, cytoplasm and nucleus are 84.78%, 71.30%, and 82.00%, respectively. Accuracies for the independent test set are 81.31%, 69.95% and 81.00%, respectively. These high prediction accuracies exhibit the state-of-the-art performance of the current method. It is anticipated that the proposed method may become a useful high-throughput tool and play a complementary role to the existing experimental techniques in identifying subcellular localizations of mammalian protein complexes. The source code of Matlab and the dataset can be obtained freely on request from the authors.
Heuristic-biased stochastic sampling
Bresina, J.L.
1996-12-31
This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that, with the proper bias function, HBSS can readily outperform greedy search.
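A minimal sketch of one HBSS decision point under assumed conventions (rank 1 = heuristic-best; the bias function maps rank to selection weight, so a point mass at rank 1 recovers greedy search and a constant bias recovers uniform random search; the function name is illustrative):

```python
import random

def hbss_choice(candidates, heuristic, bias, rng):
    """One HBSS decision: rank candidates by the heuristic (lower score =
    better), then sample one candidate with probability proportional to
    bias(rank)."""
    ranked = sorted(candidates, key=heuristic)          # rank 1 = heuristic-best
    weights = [bias(r) for r in range(1, len(ranked) + 1)]
    return rng.choices(ranked, weights=weights, k=1)[0]
```

Repeatedly restarting a constructive search that calls `hbss_choice` at each decision yields the family of algorithms described above, with the bias function tuning how far the sampling strays from the pure heuristic.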
Hastie, David I.; Zeller, Tanja; Liquet, Benoit; Newcombe, Paul; Yengo, Loic; Wild, Philipp S.; Schillert, Arne; Ziegler, Andreas; Nielsen, Sune F.; Butterworth, Adam S.; Ho, Weang Kee; Castagné, Raphaële; Munzel, Thomas; Tregouet, David; Falchi, Mario; Cambien, François; Nordestgaard, Børge G.; Fumeron, Fredéric; Tybjærg-Hansen, Anne; Froguel, Philippe; Danesh, John; Petretto, Enrico; Blankenberg, Stefan; Tiret, Laurence; Richardson, Sylvia
2013-01-01
Genome-wide association studies (GWAS) yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS where detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex Linkage Disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and study genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175), when compared with the largest published meta-GWAS (n>100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with TG-APOB and LIPC with TG-HDL phenotypic groups, which were overlooked in the larger meta-GWAS and not revealed by competing approaches, associations that we replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on Graphics Processing Units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify associated variants. This
Numerical method for the stochastic projected Gross-Pitaevskii equation
NASA Astrophysics Data System (ADS)
Rooney, S. J.; Blakie, P. B.; Bradley, A. S.
2014-01-01
We present a method for solving the stochastic projected Gross-Pitaevskii equation (SPGPE) for a three-dimensional weakly interacting Bose gas in a harmonic-oscillator trapping potential. The SPGPE contains the challenge of both accurately evolving all modes in the low-energy classical region of the system, and evaluating terms from the number-conserving scattering reservoir process. We give an accurate and efficient procedure for evaluating the scattering terms using a Hermite-polynomial based spectral-Galerkin representation, which allows us to precisely implement the low-energy mode restriction. Stochastic integration is performed using the weak semi-implicit Euler method. We extensively characterize the accuracy of our method, finding a faster-than-expected rate of stochastic convergence. Physical consistency of the algorithm is demonstrated by considering thermalization of initially random states.
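The time stepping named above, the weak semi-implicit Euler method, treats the drift at the new time level. A toy sketch of the drift-implicit idea on a scalar Ornstein-Uhlenbeck equation dx = -γx dt + σ dW (an illustrative stand-in, not the SPGPE itself):

```python
import math
import random

def ou_semi_implicit(x0, gamma, sigma, dt, n_steps, rng):
    """Drift-implicit (semi-implicit) Euler for dx = -gamma*x dt + sigma dW.
    Solving the linear drift at the new time level,
    x_{n+1} = x_n - gamma*x_{n+1}*dt + sigma*dW,
    gives a step that is stable for any dt."""
    x = x0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        x = (x + sigma * dW) / (1.0 + gamma * dt)
    return x
```

With σ = 0 the step reduces to the deterministic decay x0/(1 + γΔt)^n, and the division by (1 + γΔt) shows why stiff damping terms cannot blow up, in contrast to the explicit Euler step.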
NASA Astrophysics Data System (ADS)
Provost, Floriane; Hibert, Clément; Malet, Jean-Philippe; Stumpf, André; Doubre, Cécile
2016-04-01
Different studies have shown the presence of microseismic activity in soft-rock landslides. The seismic signals exhibit significantly different features in the time and frequency domains which allow their classification and interpretation. Most of the classes could be associated with different mechanisms of deformation occurring within and at the surface (e.g. rockfall, slide-quake, fissure opening, fluid circulation). However, some signals remain only partially understood, and some classes contain too few examples to permit interpretation. To move toward a more complete interpretation of the links between the dynamics of soft-rock landslides and the physical processes controlling their behaviour, a complete catalog of the endogenous seismicity is needed. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique that is based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The Random Forest classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, Neural Networks) and requires no fine tuning. Furthermore it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied to the seismicity recorded at the Super-Sauze landslide between 2013 and 2015. We selected a dozen seismic signal features that precisely characterize their spectral content (e.g. central frequency, spectrum width, energy in several frequency bands, spectrogram shape, spectrum local and global maxima
Wang, Yinyin; Wu, Gaolin; Deng, Lei; Tang, Zhuangsheng; Wang, Kaibo; Sun, Wenyi; Shangguan, Zhouping
2017-07-31
Grasslands are an important component of terrestrial ecosystems that play a crucial role in the carbon cycle and climate change. In this study, we collected aboveground biomass (AGB) data from 223 grassland quadrats distributed across the Loess Plateau from 2011 to 2013 and predicted the spatial distribution of the grassland AGB at a 100-m resolution from both meteorological station and remote sensing data (TM and MODIS) using a Random Forest (RF) algorithm. The results showed that the predicted grassland AGB on the Loess Plateau decreased from east to west. Vegetation indexes were positively correlated with grassland AGB, and the normalized difference vegetation index (NDVI) acquired from TM data was the most important predictive factor. Tussock and shrub tussock had the highest AGB, and desert steppe had the lowest. Rainfall higher than 400 mm might have benefitted the grassland AGB. Compared with those obtained for the bagging, mboost and the support vector machine (SVM) models, higher values for the mean Pearson coefficient (R) and the symmetric index of agreement (λ) were obtained for the RF model, indicating that this RF model could reasonably estimate the grassland AGB (65.01%) on the Loess Plateau.
Land cover classification using random forest with genetic algorithm-based parameter optimization
NASA Astrophysics Data System (ADS)
Ming, Dongping; Zhou, Tianning; Wang, Min; Tan, Tian
2016-07-01
Land cover classification based on remote sensing imagery is an important means to monitor, evaluate, and manage land resources. However, it requires robust classification methods that allow accurate mapping of complex land cover categories. Random forest (RF) is a powerful machine-learning classifier that can be used in land remote sensing. However, two important parameters of RF classification, namely, the number of trees and the number of variables tried at each split, affect classification accuracy. Thus, optimal parameter selection is an inevitable problem in RF-based image classification. This study uses the genetic algorithm (GA) to optimize the two parameters of RF to produce optimal land cover classification accuracy. HJ-1B CCD2 image data are used to classify six different land cover categories in Changping, Beijing, China. Experimental results show that GA-RF can avoid arbitrariness in the selection of parameters. The experiments also compare land cover classification results obtained with the GA-RF method, the traditional RF method (with default parameters), and the support vector machine method. With the GA-RF method, classification accuracy improved by 1.02% and 6.64% over the traditional RF and support vector machine methods, respectively. The comparison results show that GA-RF is a feasible solution for land cover classification without compromising accuracy or incurring excessive time.
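A minimal sketch of the GA-based parameter search under stated assumptions: the fitness callable below stands in for the expensive step of training an RF with a candidate (number of trees, variables per split) pair and scoring its accuracy, and all names, GA settings, and operators are illustrative, not those of the study:

```python
import random

def ga_optimize(fitness, bounds, pop_size=24, generations=40, p_mut=0.3, seed=0):
    """Tiny genetic algorithm over integer parameter tuples.

    `fitness` plays the role of a classification-accuracy estimate for an RF
    trained with those parameters (e.g. n_trees, m_try); `bounds` is a list
    of (low, high) ranges, one per parameter."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.randint(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(bounds)) if len(bounds) > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < p_mut:           # random-reset mutation
                i = rng.randrange(len(bounds))
                child[i] = rng.randint(*bounds[i])
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In the real GA-RF workflow each fitness evaluation would retrain and score a classifier, so the GA budget (population × generations) directly trades runtime against how finely the parameter space is explored.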
Exact stochastic simulation of coupled chemical reactions with delays
NASA Astrophysics Data System (ADS)
Cai, Xiaodong
2007-03-01
Gillespie's exact stochastic simulation algorithm (SSA) [J. Phys. Chem. 81, 2350 (1977)] has been widely used to simulate the stochastic dynamics of chemically reacting systems. In this algorithm, it is assumed that all reactions occur instantly. While this is true in many cases, it is also possible that some chemical reactions, such as gene transcription and translation in living cells, take a certain amount of time to finish after they are initiated. Thus, the products of such reactions will emerge after certain delays. Hence, Gillespie's SSA is not an exact algorithm for chemical reaction systems with delays. In this paper, the author develops an exact SSA for chemical reaction systems with delays, based upon the same fundamental premise of stochastic kinetics used by Gillespie in the development of his SSA. He then shows that an algorithm modified from Gillespie's SSA by Barrio et al. [PLOS Comput. Biol. 2, 1017 (2006)] is also an exact SSA for chemical reaction systems with delays, but it needs to generate more random variables than the author's algorithm.
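A minimal sketch of the idea under stated assumptions (an illustrative delayed direct method, not the author's exact formulation): pending delayed completions sit in a priority queue, and a stored completion pre-empts the next sampled initiation whenever it comes first; because exponential waiting times are memoryless, resampling after a completion keeps the simulation consistent.

```python
import heapq
import itertools
import math
import random

def delayed_ssa(x, reactions, t_end, rng):
    """Stochastic simulation with delayed product release.

    x: dict of species counts.  Each reaction is a tuple
    (propensity_fn, initiation_update, delay, completion_update); the
    initiation update is applied when the reaction fires and the completion
    update is applied `delay` time units later."""
    t, pending, tick = 0.0, [], itertools.count()
    while t < t_end:
        props = [f(x) for f, _, _, _ in reactions]
        total = sum(props)
        tau = rng.expovariate(total) if total > 0 else math.inf
        # a stored completion that comes before the next initiation fires first
        if pending and pending[0][0] <= min(t + tau, t_end):
            t, _, upd = heapq.heappop(pending)
            for species, d in upd.items():
                x[species] += d
            continue                      # memoryless: resample the next event
        t += tau
        if t >= t_end:
            break
        r = rng.random() * total          # pick the initiating reaction
        for (f, init_upd, delay, comp_upd), a in zip(reactions, props):
            r -= a
            if r <= 0.0:
                for species, d in init_upd.items():
                    x[species] += d
                if comp_upd:
                    heapq.heappush(pending, (t + delay, next(tick), comp_upd))
                break
    return x
```

For a constant-rate birth reaction with delay τ, the number of completed products by time T is Poisson with mean rate × (T - τ), which gives a simple sanity check on the simulator.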
Reyes-Lamothe, Rodrigo; Tran, Tung; Meas, Diane; Lee, Laura; Li, Alice M; Sherratt, David J; Tolmasky, Marcelo E
2014-01-01
Bacterial plasmids play important roles in metabolism, pathogenesis, and bacterial evolution, and are highly versatile biotechnological tools. Stable inheritance of plasmids depends on their autonomous replication and efficient partition to daughter cells at cell division. Active partition systems have not been identified for high-copy number plasmids, and it has been generally believed that they are partitioned randomly at cell division. Nevertheless, direct evidence for the cellular location of replicating and nonreplicating plasmids, and the partition mechanism, has been lacking. We used as a model pJHCMW1, a plasmid isolated from Klebsiella pneumoniae that includes two β-lactamase and two aminoglycoside resistance genes. Here we report that individual ColE1-type plasmid molecules are mobile and tend to be excluded from the nucleoid, mainly localizing at the cell poles but occasionally moving between poles along the long axis of the cell. As a consequence, at the moment of cell division, most plasmid molecules are located at the poles, resulting in efficient random partition to the daughter cells. Complete replication of individual molecules occurred stochastically and independently in the nucleoid-free space throughout the cell cycle, with a constant probability of initiation per plasmid.
Holmquist, R
1975-03-24
Eight proteins of diverse lengths, functions, and origins are examined for compositional non-randomness amino acid by amino acid. The proteins investigated are human fibrinopeptide A, guinea pig insulin, rattlesnake cytochrome c, MS2 phage coat protein, rabbit triosephosphate isomerase, bovine pancreatic deoxyribonuclease A, bovine glutamate dehydrogenase, and Bacillus thermoproteolyticus thermolysin. As a result of this study the experimentally testable hypothesis is put forth that for a large class of proteins the ratio of that fraction of the molecule which exhibits compositional non-randomness to that fraction which does not is, on average, stable about a mean value (estimated as 0.32 plus or minus 0.17) and (nearly) independent of protein length. Stochastic and selective evolutionary forces are viewed as interacting rather than independent phenomena. With respect to amino acid composition, this coupling ameliorates the current controversy over Darwinian vs. non-Darwinian evolution, selectionist vs. neutralist, in favor of neither: Within the context of the quantitative data, the evolution of real proteins is seen as a compromise between the two viewpoints, both important. The compositional fluctuations of the electrically charged amino acids glutamic and aspartic acid, lysine and arginine, are examined in depth for over eighty protein families, both prokaryotic and eukaryotic. For both taxa, each of the acidic amino acids is present in amounts roughly twice that predicted from the genetic code. The presence of an excess of glutamic acid is independent of the presence of an excess of aspartic acid and vice versa.
Yale, Jean-François; Berard, Lori; Groleau, Mélanie; Javadi, Pasha; Stewart, John; Harris, Stewart B
2017-10-01
It was uncertain whether an algorithm that involves increasing insulin dosages by 1 unit/day may cause more hypoglycemia with the longer-acting insulin glargine 300 units/mL (GLA-300). The objective of this study was to compare safety and efficacy of 2 titration algorithms, INSIGHT and EDITION, for GLA-300 in people with uncontrolled type 2 diabetes mellitus, mainly in a primary care setting. This was a 12-week, open-label, randomized, multicentre pilot study. Participants were randomly assigned to 1 of 2 algorithms: they either increased their dosage by 1 unit/day (INSIGHT, n=108) or the dose was adjusted by the investigator at least once weekly, but no more often than every 3 days (EDITION, n=104). The target fasting self-monitored blood glucose was in the range of 4.4 to 5.6 mmol/L. The percentages of participants reaching the primary endpoint of fasting self-monitored blood glucose ≤5.6 mmol/L without nocturnal hypoglycemia were 19.4% (INSIGHT) and 18.3% (EDITION). At week 12, 26.9% (INSIGHT) and 28.8% (EDITION) of participants achieved a glycated hemoglobin value of ≤7%. No differences in the incidence of hypoglycemia of any category were noted between algorithms. Participants in both arms of the study were much more satisfied with their new treatment as assessed by the Diabetes Treatment Satisfaction Questionnaire. Most health-care professionals (86%) preferred the INSIGHT over the EDITION algorithm. The frequency of adverse events was similar between algorithms. A patient-driven titration algorithm of 1 unit/day with GLA-300 is effective and comparable to the previously tested EDITION algorithm and is preferred by health-care professionals. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.
Stevenson, Gordon N; Collins, Sally L; Ding, Jane; Impey, Lawrence; Noble, J Alison
2015-12-01
Volumetric segmentation of the placenta using 3-D ultrasound is currently performed clinically to investigate correlation between organ volume and fetal outcome or pathology. Previously, interpolative or semi-automatic contour-based methodologies were used to provide volumetric results. We describe the validation of an original random walker (RW)-based algorithm against manual segmentation and an existing semi-automated method, virtual organ computer-aided analysis (VOCAL), using initialization time, inter- and intra-observer variability of volumetric measurements and quantification accuracy (with respect to manual segmentation) as metrics of success. Both semi-automatic methods require initialization. Therefore, the first experiment compared initialization times. Initialization was timed by one observer using 20 subjects. This revealed significant differences (p < 0.001) in time taken to initialize the VOCAL method compared with the RW method. In the second experiment, 10 subjects were used to analyze intra-/inter-observer variability between two observers. Bland-Altman plots were used to analyze variability combined with intra- and inter-observer variability measured by intra-class correlation coefficients, which were reported for all three methods. Intra-class correlation coefficient values for intra-observer variability were higher for the RW method than for VOCAL, and both were similar to manual segmentation. Inter-observer variability was 0.94 (0.88, 0.97), 0.91 (0.81, 0.95) and 0.80 (0.61, 0.90) for manual, RW and VOCAL, respectively. Finally, a third observer with no prior ultrasound experience was introduced and volumetric differences from manual segmentation were reported. Dice similarity coefficients for observers 1, 2 and 3 were respectively 0.84 ± 0.12, 0.94 ± 0.08 and 0.84 ± 0.11, and the mean was 0.87 ± 0.13. The RW algorithm was found to provide results concordant with those for manual segmentation and to outperform VOCAL in aspects of observer
Predicting Solar Flares Using SDO/HMI Vector Magnetic Data Product and Random Forest Algorithm
NASA Astrophysics Data System (ADS)
Liu, Chang; Deng, Na; Wang, Jason; Wang, Haimin
2017-08-01
Adverse space weather effects can often be traced to solar flares, the prediction of which has drawn significant research interest. Many previous forecasting studies used physical parameters derived from photospheric line-of-sight field or ground-based vector field observations. The Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory produces full-disk vector magnetograms at a continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December, and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hours, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
Sullivan, Shannon D; Downs, Erin; Popoveniuc, Geanina; Zeymo, Alexander; Jonklaas, Jacqueline; Burman, Kenneth D
2017-09-01
Regulation of maternal thyroid hormones during pregnancy is crucial for optimal maternal and fetal outcomes. There are no specific guidelines addressing maternal levothyroxine (LT4) dose adjustments throughout pregnancy. To compare two LT4 dose-adjustment algorithms in hypothyroid pregnant women. Thirty-three women on stable LT4 doses were recruited at <10 weeks gestation during 38 pregnancies and randomized to one of two dose-adjustment groups. Group 1 (G1) used an empiric two-pill/week dose increase followed by subsequent pill-per-week dose adjustments. In group 2 (G2), LT4 dose was adjusted in an ongoing approach in micrograms per day based on current thyroid stimulating hormone (TSH) level and LT4 dose. TSH was monitored every 2 weeks in trimesters 1 and 2 and every 4 weeks in trimester 3. Academic endocrinology clinics in Washington, DC. Proportion of TSH values within trimester-specific goal ranges. Mean gestational age at study entry was 6.4 ± 2.1 weeks. Seventy-five percent of TSH values were within trimester-specific goal ranges in G1 compared with 81% in G2 (P = 0.09). Similar numbers of LT4 dose adjustments per pregnancy were required in both groups (G1, 3.1 ± 2.0 vs G2, 4.1 ± 3.2; P = 0.27). Women in G1 were more likely to have suppressed TSH <0.1 mIU/L in trimester 1 (P = 0.01). Etiology of hypothyroidism, but not thyroid antibody status, was associated with proportion of goal TSH values. We compared two options for LT4 dose adjustment and showed that an ongoing adjustment approach is as effective as empiric dose increase for maintaining goal TSH in hypothyroid women during pregnancy.
Predicting Solar Flares Using SDO/HMI Vector Magnetic Data Products and the Random Forest Algorithm
NASA Astrophysics Data System (ADS)
Liu, Chang; Deng, Na; Wang, Jason T. L.; Wang, Haimin
2017-07-01
Adverse space-weather effects can often be traced to solar flares, the prediction of which has drawn significant research interests. The Helioseismic and Magnetic Imager (HMI) produces full-disk vector magnetograms with continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hr, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. To our knowledge, this is the first time that RF has been used to make multiclass predictions of solar flares. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
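The evaluation protocol described above (stratified 10-fold cross-validation plus per-class performance metrics) can be sketched in plain Python. The random forest itself is omitted here; any implementation, such as scikit-learn's RandomForestClassifier, would slot into the resulting train/test loop. Function names are illustrative, not from the paper:

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs, preserving class proportions per fold."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # Deal each class's samples round-robin so every fold gets its share.
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for f in range(k):
        test = folds[f]
        train = [i for g in range(k) if g != f for i in folds[g]]
        yield train, test

def recall_per_class(y_true, y_pred):
    """Recall (true-positive rate) for each class label, e.g. B/C/M/X."""
    out = {}
    for c in set(y_true):
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        total = sum(1 for t in y_true if t == c)
        out[c] = hits / total
    return out
```

Because flare classes are heavily imbalanced (far more B-class than X-class source regions), the stratification step matters: a plain random split could leave a fold with no X-class examples at all.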
Evolution with Stochastic Fitness and Stochastic Migration
Rice, Sean H.; Papadopoulos, Anthony
2009-01-01
Background: Migration between local populations plays an important role in evolution, influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings: We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased, overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways; one result is that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance: As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory.
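For background, the deterministic Price equation that the stochastic-migration version generalizes can be checked numerically in a few lines. This closed-population, faithful-transmission case is a simplification for illustration, not the paper's full result:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Population covariance of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def price_delta_zbar(w, z):
    """Selection term of the Price equation: cov(w, z) / mean(w).

    Assumes a closed population and faithful transmission (no change in
    trait z within lineages), so the transmission term vanishes.
    """
    return cov(w, z) / mean(w)

def direct_delta_zbar(w, z):
    """Direct bookkeeping: offspring-weighted mean trait minus parental mean."""
    return sum(wi * zi for wi, zi in zip(w, z)) / sum(w) - mean(z)
```

For fitnesses w = [1, 2, 3] and trait values z = [0.0, 0.5, 1.0], both routes give the same change in mean trait, 1/6; with equal fitnesses the covariance, and hence the selection response, is zero. The paper's contribution is to extend this bookkeeping exactly to open populations where both fitness and migration are random variables.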